398 results about "Histogram of oriented gradients" patented technology

The histogram of oriented gradients (HOG) is a feature descriptor used in computer vision and image processing for the purpose of object detection. The technique counts occurrences of gradient orientation in localized portions of an image. This method is similar to that of edge orientation histograms, scale-invariant feature transform descriptors, and shape contexts, but differs in that it is computed on a dense grid of uniformly spaced cells and uses overlapping local contrast normalization for improved accuracy.
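The counting step described above can be sketched in a few lines of NumPy. This is a minimal illustration of per-cell orientation binning only, not the full Dalal–Triggs descriptor (which adds the overlapping block-normalization step mentioned above); the cell size and bin count are typical defaults, not fixed by the definition:

```python
import numpy as np

def hog_cells(img, cell=8, bins=9):
    """Per-cell histograms of gradient orientation (core HOG binning step).

    A full HOG descriptor would additionally normalize overlapping
    blocks of cells; that step is omitted here for brevity.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    # unsigned orientation in [0, 180)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    h, w = img.shape
    H = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            idx = (a / (180.0 / bins)).astype(int) % bins
            for b in range(bins):
                H[i, j, b] = m[idx == b].sum()
    return H

# A vertical step edge concentrates its gradient energy in one bin.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
H = hog_cells(img)
```

A vertical edge produces purely horizontal gradients, so all the energy in the affected cells lands in the 0-degree bin.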

Multi-target pedestrian detecting and tracking method in monitoring video

The invention discloses a multi-target pedestrian detection and tracking method for surveillance video, comprising the steps of: applying a deep-learning-based target detection network to the first frame of pedestrian images to obtain initial rectangular areas for one or more pedestrian targets; extracting the histogram of oriented gradients (HOG) feature of each target from the initial target area information, performing kernel-function autocorrelation calculation in the Fourier expansion domain, and initializing the tracking model from the result; from the second frame of pedestrian images onward, constructing a multi-scale pyramid from the target area information of the tracking model, and performing HOG feature matrix extraction and Fourier-domain kernel-function autocorrelation calculation on the pedestrian rectangular area at each scale; and determining the re-check condition, then performing identity re-verification and tracking-model updating on pedestrian targets that require re-checking. The invention is advantageous in that the model drift problem is resolved, a more accurate pedestrian trajectory is obtained, and real-time performance is good.
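The "kernel function autocorrelation in the Fourier expansion domain" step is characteristic of KCF-style correlation trackers. The sketch below shows Gaussian kernel correlation over all cyclic shifts computed via the FFT; it is an illustration of that general technique under assumed parameters, not the patent's exact formulation:

```python
import numpy as np

def gaussian_kernel_correlation(x1, x2, sigma=0.5):
    """Gaussian kernel correlation of x2 with all cyclic shifts of x1,
    evaluated efficiently in the Fourier domain (KCF-style sketch)."""
    # cross-correlation via FFT covers every cyclic shift at once
    c = np.fft.ifft2(np.fft.fft2(x1) * np.conj(np.fft.fft2(x2))).real
    d = (x1 ** 2).sum() + (x2 ** 2).sum() - 2.0 * c
    return np.exp(-np.clip(d, 0, None) / (sigma ** 2 * x1.size))

x = np.random.default_rng(0).standard_normal((32, 32))
k = gaussian_kernel_correlation(x, x)   # autocorrelation of a patch
```

For autocorrelation the zero-shift entry `k[0, 0]` equals 1, since a patch matches itself exactly.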
Owner:SOUTH CHINA UNIV OF TECH

Full-view monitoring robot system and monitoring robot

The invention discloses a full-view monitoring robot system, which comprises a monitoring robot, a wireless interaction unit and a remote monitoring terminal, wherein the monitoring robot comprises a robot housing, an image acquisition unit, a sensing unit, a processor and a moving unit; the image acquisition unit comprises a plurality of cameras which surround the robot housing at intervals for acquiring all-around images on the four sides of the monitoring robot; the sensing unit comprises a sensor network on the robot housing; the processor comprises an image detection unit and a motion controller, wherein the image detection unit extracts characteristics of a directional gradient column diagram from the images acquired by the image acquisition unit, classifies linearly supported vector machine, detects human body images according to the classification result and generates a control command when the human body images are detected; and the motion controller receives the control command and controls the travel unit to travel according to the control command. The system can perform 360 degree full-view monitoring and improve monitoring efficiency. Besides, the invention also provides a monitoring robot for use in the full-view monitoring robot system.
Owner:SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI

Dressing safety detection method for worker on working site of electric power facility

The invention discloses a dressing safety detection method for workers on electric power facility working sites. An SVM (support vector machine) classifier is trained on HOG (histogram of oriented gradients) features to identify workers on the working site and, from the identification result, judge whether each worker is properly dressed. The method comprises the steps of: detecting worker targets appearing on the working site with the trained HOG-based classifier, and judging whether the dressing and equipment of each identified worker meet the site safety requirements, mainly covering safety items such as whether a helmet is worn, whether safety clothing is worn completely (with no exposed skin), and whether a worker on a pole transformer correctly wears a safety belt. With the method, a worker's dressing can be checked before the worker enters the working site, without deploying an additional supervisor; moreover, if a worker's dressing does not conform to the norms, the worker is warned and prompted, so that safety accidents caused by nonstandard dressing are avoided and potential safety hazards are eliminated.
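The HOG + linear SVM detector pattern used here can be sketched with a tiny hinge-loss classifier. The subgradient-descent trainer below is a stand-in for whatever solver the patent's authors used (in practice one would reach for a library such as liblinear); the toy 2-D data and all hyperparameters are assumptions for illustration:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Tiny linear SVM trained by subgradient descent on the hinge loss.
    y must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                     # margin-violating samples
        gw = lam * w - (y[mask, None] * X[mask]).sum(0) / n
        gb = -y[mask].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# stand-in for HOG feature vectors of positive/negative windows
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y = np.r_[np.ones(50), -np.ones(50)]
w, b = train_linear_svm(X, y)
acc = ((X @ w + b > 0) == (y > 0)).mean()
```

In the detection setting, each row of `X` would be the HOG descriptor of a sliding window and `y` would mark worker versus background.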
Owner:STATE GRID CORP OF CHINA +6

Target tracking method based on correlation filtering and color histogram statistics and ADAS (Advanced Driving Assistance System)

The invention discloses a target tracking method based on correlation filtering and color histogram statistics, and an ADAS (Advanced Driving Assistance System). The target tracking method comprises the steps of: extracting HOG (histogram of oriented gradients) features and color histogram statistics of a target region, and generating an initial tracker; extracting HOG features of the next image frame according to the initial tracker, and convolving the feature image with the current filter h&lt;t&gt; to acquire a template response value f&lt;tmpl&gt;(x); extracting the color histogram statistics of the image frame, and calculating a histogram response value f&lt;hist&gt;(x) with the current color histogram weight vector beta&lt;t&gt;; and fusing the template response value f&lt;tmpl&gt;(x) and the histogram response value f&lt;hist&gt;(x) to acquire the final response value f(x) of the target, then detecting and positioning the target according to f(x). The invention utilizes the complementarity of the two tracking algorithms, ensures both the speed and the accuracy of the tracker, greatly reduces drift of the tracked target, and has good application prospects in driving assistance systems.
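The fusion step is the essence of Staple-style trackers: the sharp but drift-prone correlation-filter response is merged with the coarse but color-robust histogram response. A minimal sketch, assuming a simple convex combination with an illustrative merge factor `alpha` (the patent does not specify the fusion rule used):

```python
import numpy as np

def fuse_responses(f_tmpl, f_hist, alpha=0.3):
    """Convex combination of the correlation-filter response map and the
    color-histogram response map; the target is placed at the peak."""
    return (1.0 - alpha) * f_tmpl + alpha * f_hist

f_tmpl = np.zeros((5, 5)); f_tmpl[2, 2] = 1.0   # filter peak at center
f_hist = np.zeros((5, 5)); f_hist[2, 3] = 1.0   # histogram peak offset
f = fuse_responses(f_tmpl, f_hist)
pos = np.unravel_index(f.argmax(), f.shape)     # fused target position
```

With `alpha < 0.5` the sharper template peak dominates, so the fused estimate stays at the filter's location unless the histogram evidence is strong.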
Owner:开易(北京)科技有限公司

Human face age estimation method based on fusion of deep characteristics and shallow characteristics

The invention discloses a human face age estimation method based on the fusion of deep characteristics and shallow characteristics. The method comprises the following steps that: preprocessing each human face sample image in a human face sample dataset; training a constructed initial convolutional neural network, and selecting a convolutional neural network used for human face recognition; utilizing a human face dataset with an age tag value to carry out fine tuning processing on the selected convolutional neural network, and obtaining a plurality of convolutional neural networks used for age estimation; carrying out extraction to obtain multi-level age characteristics corresponding to the human face, and outputting the multi-level age characteristics as the deep characteristics; extracting the HOG (Histogram of Oriented Gradient) characteristic and the LBP (Local Binary Pattern) characteristic of the shallow characteristics of each human face image; constructing a deep belief network to carry out fusion on the deep characteristics and the shallow characteristics; and according to the fused characteristics in the deep belief network, carrying out the age regression estimation of the human face image to obtain an output an age estimation result. By sue of the method, age estimation accuracy is improved, and the method owns a human face image age estimation capability with high accuracy.
Owner:NANJING UNIV OF POSTS & TELECOMM

Method and system for automatically tracking moving pedestrian video based on particle filtering

The invention discloses a method and a system for automatically tracking a moving pedestrian in video based on particle filtering. The method comprises the following steps: inputting one frame of images and carrying out detection with an HOG (histogram of oriented gradients) feature vector set and an SVM (support vector machine) classifier; to realize particle filtering tracking based on dual HOG and color features, first obtaining the initial rectangular area of the target pedestrian, sampling a plurality of particles from the target rectangular area, extracting HOG and color features, computing the weight of each particle after the two features are fused, obtaining the final state estimate with a minimum mean square error estimator, outputting the estimated target and then resampling; and finally locking onto the tracked pedestrian. The method extracts dual HOG and color features to increase the robustness of the particle filtering likelihood model and eliminate instability during tracking; by building a better likelihood model through a weighted-mean fusion strategy that incorporates the HOG feature, the robustness of the tracking algorithm is greatly increased and stable tracking is achieved.
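The resampling step described above can be sketched as follows. This shows systematic resampling, one common choice; the patent does not name a specific resampling scheme, and the toy particles/weights are purely illustrative (in the tracker, each weight would come from the fused HOG-plus-color likelihood):

```python
import numpy as np

def resample(particles, weights, rng):
    """Systematic resampling: draw len(particles) new particles in
    proportion to their normalized likelihood weights."""
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n   # stratified points
    cumsum = np.cumsum(weights)
    idx = np.searchsorted(cumsum, positions)
    return particles[idx]

rng = np.random.default_rng(0)
particles = np.array([0.0, 1.0, 2.0, 3.0])
weights = np.array([0.0, 0.0, 1.0, 0.0])   # all likelihood on particle 2
new = resample(particles, weights, rng)
```

Degenerate weights make the behavior easy to check: every resampled particle is a copy of the sole high-likelihood one.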
Owner:SUZHOU UNIV

Plant disease and pest detection method based on SVM (support vector machine) learning

Inactive · CN102915446A · Improve efficiency · Realize continuous computing · Character and pattern recognition · Disease · Feature vector
The invention belongs to the technical field of digital image processing and pattern recognition and particularly relates to a plant disease and pest detection method based on SVM (support vector machine) learning. The plant disease and pest detection method comprises the following steps: acquiring a large number of images of normally grown plant leaves and of leaves with diseases and pests from monitoring videos of agricultural scenes; extracting a portion of the pictures of normally grown leaves and diseased leaves as samples; extracting the characteristics of each leaf picture, including the color characteristic, HSV (hue, saturation, value) characteristic, edge characteristic and HOG (histogram of oriented gradients) characteristic; combining the characteristics into feature vectors; training on the feature vectors of the leaf pictures with an SVM learning method to form a classifier; and using the classifier to detect, over a large number of plant leaf pictures, whether diseases or pests occur on the leaves. Compared with biological plant disease and pest detection methods, the SVM-based method offers better real-time performance and is easier to implement, and it removes the need to detect plant diseases and pests by working in the fields.
Owner:FUDAN UNIV

Road traffic sign identification method with multiple-camera integration based on DS evidence theory

The present invention relates to a road traffic sign identification method with multiple-camera fusion based on the DS evidence theory, belonging to the technical field of image processing. The method mainly identifies five types of road traffic indication signs: going straight, turning left, turning right, going straight or turning left, and going straight or turning right, and is divided into a training part and a testing part. In the training stage, the histogram of oriented gradients feature of each training sample is extracted, and the sample features and category labels are fed into a support vector machine for classification training. In the testing stage, a region of interest is obtained through image pre-processing, its histogram of oriented gradients feature is extracted and sent to the classifier; based on the credibility, produced by the classifier, that the sign to be identified belongs to each category, combined with the DS evidence theory data fusion method and the maximum-credibility decision rule, the final sign identification result is determined. The invention employs a multiple-camera data fusion method based on the DS evidence theory, integrating the information of multiple cameras into a final identification result, so that road traffic signs can be identified stably and efficiently.
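Dempster's rule of combination, the core of the DS-evidence fusion step, can be sketched for the simple case where each camera assigns belief mass only to the singleton sign classes (no compound hypotheses). The example belief vectors are invented for illustration:

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule for two basic probability assignments over the
    same singleton classes: agreeing mass is kept and renormalized by
    the total non-conflicting mass."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    joint = np.outer(m1, m2)
    conflict = joint.sum() - np.trace(joint)   # mass on disagreeing pairs
    return np.diag(joint) / (1.0 - conflict)

# two cameras mostly agree that the sign is class 0 ("going straight")
cam1 = [0.7, 0.2, 0.1]
cam2 = [0.6, 0.3, 0.1]
fused = dempster_combine(cam1, cam2)
```

Because the cameras agree, the fused belief in class 0 exceeds either individual belief; the maximum-credibility decision rule then simply takes `fused.argmax()`.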
Owner:CHONGQING UNIV OF POSTS & TELECOMM

Agricultural pest image recognition method based on multi-feature deep learning technology

The invention relates to an agricultural pest image recognition method based on a multi-feature deep learning technology. It addresses the poor pest image recognition performance of prior methods under complex environmental conditions. The method comprises the following steps: carrying out multi-feature extraction on large-scale pest image samples, extracting their color features, texture features, shape features, scale-invariant feature transform features and histogram of oriented gradients features; carrying out multi-feature deep learning, performing unsupervised dictionary training separately on each feature type to obtain a sparse representation of each; carrying out multi-feature representation of the training samples, constructing a multi-feature sparse-coding histogram for the pest image samples by combining the different feature types; and constructing a multi-kernel learning classifier by learning the sparse-coding histograms of positive and negative pest image samples, which is then used to classify pest images. The method improves the accuracy of pest recognition.
Owner:HEFEI INSTITUTES OF PHYSICAL SCIENCE - CHINESE ACAD OF SCI

Fast adaptation method for traffic video monitoring target detection based on machine vision

Inactive · CN103208008A · Shorten the training process · Robust and accurate object detection results · Character and pattern recognition · Video monitoring · Histogram of oriented gradients
The invention belongs to the field of machine vision and intelligent control, achieving fast self-adaptation of traffic video monitoring target detection. The method comprises the steps of: building an initial training sample bank; training an AdaBoost classifier based on Haar characteristics and a support vector machine (SVM) classifier based on histogram of oriented gradients (HOG) characteristics, respectively; and detecting monitoring images frame by frame with the two classifiers. In the detection process, the sub-image in each detection frame is predicted by both classifiers, a confidence-degree judgment is made on the predicted results, and the prediction label of the higher-confidence classifier, together with the sub-image, is added to the additional training sample bank of the lower-confidence classifier, until the size of the detection frame reaches half the size of the detected image; the two classifiers are then retrained with the updated training sample banks and the next frame is detected, until all images are processed. The final classifiers can be used to detect targets such as vehicles and pedestrians in actual traffic scenes.
Owner:HUNAN HUANAN OPTO ELECTRO SCI TECH CO LTD

A multi-layer convolution feature self-adaptive fusion moving target tracking method

The invention relates to a multi-layer convolution feature self-adaptive fusion moving target tracking method, and belongs to the field of computer vision. The method comprises the following steps: firstly, initializing the target area in the first frame of the image, using the trained deep network VGG-19 to extract the first- and fifth-layer convolution features of the target image block, and obtaining two templates through learning and training of a correlation filter; secondly, extracting features of a detection sample at the position and scale predicted from the previous frame's target, and convolving the detection sample features with the two templates of the previous frame to obtain response maps for the two feature layers; calculating the weight of each response map with the APCE measure, and adaptively weighting and fusing the response maps to determine the final position of the target; and, after the position is determined, estimating the optimal target scale by extracting histogram of oriented gradients features at multiple scales of the target. With the method, the target is positioned more accurately and tracking precision is improved.
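The APCE (Average Peak-to-Correlation Energy) measure used to weight the response maps has a standard closed form: the squared peak-to-minimum gap divided by the mean squared deviation of the map from its minimum. A sketch with synthetic response maps (the maps themselves are invented for illustration):

```python
import numpy as np

def apce(response):
    """Average Peak-to-Correlation Energy of a response map; higher
    values indicate a sharper, more reliable peak."""
    fmax, fmin = response.max(), response.min()
    return (fmax - fmin) ** 2 / np.mean((response - fmin) ** 2)

# a clean single-peak map versus a noisy, ambiguous one
sharp = np.zeros((10, 10)); sharp[5, 5] = 1.0
noisy = np.random.default_rng(0).random((10, 10))
a_sharp, a_noisy = apce(sharp), apce(noisy)
```

The sharp map scores far higher, so in the fusion step its feature layer would receive the larger adaptive weight.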
Owner:KUNMING UNIV OF SCI & TECH

Three-dimensional gesture action recognition method based on depth images

The invention provides a three-dimensional gesture action recognition method based on depth images. The method comprises the steps of: acquiring depth images containing gesture actions; segmenting the human body region corresponding to the gesture actions from the images through tracking and positioning based on fast template tracking and oblique plane matching, to obtain a depth image sequence with the background removed; extracting the useful frames of the gesture actions from the background-removed depth images; calculating motion history images of the gesture actions in the front-view, top-view and side-view projection directions from the extracted useful frames; extracting the histogram of oriented gradients features corresponding to the three motion history images; calculating the correlation between the combined features of the observed gesture action and the gesture action templates stored in a pre-defined gesture action library; and taking the template with the largest correlation as the recognition result for the current gesture action. The method thus achieves three-dimensional gesture action recognition, and can also be applied to recognizing the movement of simple objects.
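A motion history image (MHI) encodes recency of motion: pixels that move in the current frame are set to a maximum value and all others decay toward zero. A minimal sketch of the per-frame update, with an assumed decay of 1 per frame and an illustrative duration `tau`:

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=10):
    """One MHI update step: moving pixels are set to tau, all other
    pixels decay by 1 (floored at zero)."""
    decayed = np.maximum(mhi - 1, 0)
    return np.where(motion_mask, tau, decayed)

mhi = np.zeros((4, 4))
mask1 = np.zeros((4, 4), bool); mask1[0, 0] = True
mask2 = np.zeros((4, 4), bool); mask2[1, 1] = True
mhi = update_mhi(mhi, mask1)   # frame 1: pixel (0,0) moves
mhi = update_mhi(mhi, mask2)   # frame 2: (1,1) moves, (0,0) decays
```

In the patent's pipeline, HOG features would then be extracted from the three projection-direction MHIs and matched against the template library.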
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI

Traveling vehicle vision detection method combining laser point cloud data

Active · CN110175576A · Avoid the problem of difficult access to spatial geometric information · Realize 3D detection · Image enhancement · Image analysis · Histogram of oriented gradients · Vehicle detection
The invention discloses a traveling vehicle vision detection method combining laser point cloud data. It belongs to the field of unmanned driving and solves the problems of prior-art vehicle detection built around a laser radar. The method comprises the following steps: first completing the joint calibration of the laser radar and camera, then performing time alignment; calculating an optical flow grey-scale map between adjacent frames of the calibrated video data, and performing motion segmentation on it to obtain motion regions as candidate regions; for the time-aligned point cloud data corresponding to each image frame, searching the point cloud corresponding to the vehicle within the conical space of each candidate region to obtain the three-dimensional bounding box of the moving object; extracting a histogram of oriented gradients feature from each image frame based on the candidate regions; extracting features of the point cloud data within the three-dimensional bounding box; and, based on a genetic algorithm, carrying out feature-level fusion of the obtained features and classifying the fused motion regions to obtain the final traveling vehicle detection result. The method is used for visual detection of traveling vehicles.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA

Video copy detection method based on multi-feature Hash

The invention discloses a video copy detection method based on multi-feature hashing, which mainly solves the problem that existing video copy detection algorithms cannot effectively balance detection efficiency and detection accuracy. The method comprises the following steps: (1) extracting the pyramid histogram of oriented gradients (PHOG) of each key frame as its global feature; (2) extracting a weighted contrast histogram based on scale-invariant feature transform (SIFT) of the key frame as its local feature; (3) establishing a target function with the similarity-preserving multi-feature hash learning (SPM2H) algorithm, and obtaining L hash functions by optimization; (4) mapping the key frames of the database video and of the queried video into L-dimensional hash codes with the L hash functions; and (5) judging whether the queried video is a copy through feature matching. The method is robust to multiple attacks and can be used for copyright protection, copy control and data mining of digital videos on the Internet.
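Once key frames are mapped to L-bit codes, the matching in step (5) reduces to Hamming-distance comparison. A minimal sketch with tiny invented 4-bit codes (real codes would be L-dimensional and the decision would use a distance threshold):

```python
import numpy as np

def hamming_matrix(A, B):
    """Pairwise Hamming distances between rows of two binary code
    matrices (each row is the L-bit hash of one key frame)."""
    return (A[:, None, :] != B[None, :, :]).sum(-1)

db = np.array([[0, 1, 1, 0],      # hash codes of database key frames
               [1, 1, 1, 1]])
query = np.array([[0, 1, 1, 0]])  # hash code of a queried key frame
d = hamming_matrix(query, db)
nearest = d.argmin(1)             # best-matching database frame
```

A query whose nearest code lies within a small Hamming radius would be flagged as a copy.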
Owner:XIDIAN UNIV

Method and system for detecting pedestrian in front of vehicle

The invention discloses a method and a system for detecting pedestrians in front of a vehicle. The method comprises the steps of image acquisition and preprocessing, image scaling, LBP (local binary pattern) and HOG (histogram of oriented gradients) feature extraction, region-of-interest extraction, target identification, and target fusion and early warning, so that the driver is alerted in a timely manner when a pedestrian is present in front of the vehicle. The system comprises three parts: an image acquisition unit, an SOPC (system on programmable chip) unit and an ASIC (application-specific integrated circuit) unit, wherein the image acquisition unit is a camera unit; the SOPC unit comprises an image preprocessing unit, a region-of-interest extraction unit, a target identification unit, and a target fusion and early warning unit; and the ASIC unit comprises an image scaling unit, an LBP feature extraction unit and an HOG feature extraction unit. The LBP and HOG features are used jointly, and the two-level detection improves the overall accuracy of pedestrian detection; HOG feature extraction is dynamically adjusted according to the classification results of the LBP-based SVM (support vector machine), which reduces the calculation amount, improves the calculation speed, and improves driving safety.
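The LBP feature that gates the first detection stage can be sketched in its basic 3x3 form: each of the eight neighbors contributes one bit of the code depending on whether it is at least as bright as the center pixel. This is the textbook operator, shown as a plain NumPy illustration rather than the patent's hardware implementation:

```python
import numpy as np

def lbp_8(img):
    """Basic 3x3 local binary pattern code for each interior pixel."""
    c = img[1:-1, 1:-1]                       # center pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

# a dark center surrounded by brighter neighbors sets all eight bits
img = np.array([[5, 5, 5],
                [5, 4, 5],
                [5, 5, 5]], dtype=np.int32)
code = lbp_8(img)
```

Histograms of these codes over image cells form the LBP descriptor that the first-stage SVM classifies before the costlier HOG stage runs.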
Owner:SHANGHAI UNIV

Video moving target classification and identification method based on outline constraint

The invention provides a video moving target classification and identification method based on contour constraint. The method comprises the steps of: (1) obtaining an accurate target region and target contour through a level-set segmentation algorithm based on color features, texture features and shape prior constraints; (2) convolving the target region with a Gaussian filter to obtain the spatial detail component of the target; (3) extracting the local binary pattern histogram of the spatial detail component to obtain the texture features of the target; (4) extracting the histogram of oriented gradients of the contour-constrained local region within the target region to obtain the edge gradient features of the target; (5) extracting the texture and edge gradient features of the training sample targets and training on them with a machine learning method to obtain a target classification model; and (6) extracting the texture and edge gradient features of the target to be identified and inputting them to the classification model to determine the target type. The method improves classification accuracy under complex outdoor conditions.
Owner:BEIHANG UNIV