
482 results about "Local binary patterns" patented technology

Local binary patterns (LBP) is a type of visual descriptor used for classification in computer vision. LBP is a particular case of the Texture Spectrum model proposed in 1990 and was first described in 1994. It has since been found to be a powerful feature for texture classification, and it has further been determined that combining LBP with the Histogram of Oriented Gradients (HOG) descriptor improves detection performance considerably on some datasets. A comparison of several improvements of the original LBP in the field of background subtraction was made in 2015 by Silva et al. A full survey of the different versions of LBP can be found in Bouwmans et al.
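As context for the entries below, here is a minimal sketch of how a basic LBP descriptor is commonly computed, using NumPy and the local_binary_pattern helper from scikit-image; the 8-neighbour, radius-1, uniform-pattern settings are illustrative defaults, not taken from any of the patents listed here.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, n_points=8, radius=1):
    """Uniform-LBP histogram of a 2-D grayscale image."""
    # Each pixel is re-coded by thresholding its circular neighbourhood
    # against the centre value; 'uniform' maps the codes into P + 2 bins.
    codes = local_binary_pattern(gray, n_points, radius, method="uniform")
    n_bins = n_points + 2
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist  # a compact, illumination-robust texture signature
```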

Face detecting and tracking method and device

Inactive · CN103116756A · Solves susceptibility to light intensity · Conforms to visual characteristics · Character and pattern recognition · Face detection · Tracking algorithm
The invention provides a face detection and tracking method and device. The method comprises the following steps: inputting a face image or video, preprocessing it for illumination, detecting the face with an AdaBoost algorithm to determine its initial position, and tracking the face with a Mean Shift algorithm. During image preprocessing, an adaptive local contrast enhancement method is provided to enhance image detail and improve robustness under different illumination conditions. During face detection, frontal face samples captured under different illumination are added to the training set, and the AdaBoost algorithm increases detection accuracy. During face tracking, to overcome the Mean Shift algorithm's reliance on color alone, the tracking algorithm fuses gradient features with local binary pattern (LBP) texture features, where the LBP texture features additionally use the local LBP variance to express changes in image contrast, improving the accuracy of both face detection and face tracking.
Owner:BEIJING TECHNOLOGY AND BUSINESS UNIVERSITY
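A rough sketch of a detect-then-track pipeline of this kind, using OpenCV's stock Haar/AdaBoost face cascade and plain mean-shift tracking on a hue histogram; the patent's adaptive contrast enhancement and its fusion of gradient and LBP-variance features into the tracking kernel are not reproduced here, and the video file name is a placeholder.

```python
import cv2

# Placeholder file name; any video containing a frontal face works.
cap = cv2.VideoCapture("face_video.mp4")
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
x, y, w, h = [int(v) for v in faces[0]]      # assumes a face is found in frame 1
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [16], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # Mean shift moves the window toward the mode of the back-projection.
    _, (x, y, w, h) = cv2.meanShift(back_proj, (x, y, w, h), term)
```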

Newborn-painful-expression recognition method based on dual-channel-characteristic deep learning

The invention discloses a newborn painful-expression recognition method based on dual-channel-characteristic deep learning. The method includes the following steps: first, newborn facial images are converted to grayscale and their Local Binary Pattern (LBP) feature maps are extracted; second, a dual-channel convolutional neural network learns deep features from the two parallel input channels, namely the grayscale images and their LBP feature maps; finally, the fused features of the two channels are classified by a softmax-based classifier into four expressions: calm, crying, mild pain and acute pain. By combining the feature information of the grayscale images and their LBP feature maps, the method effectively recognizes the calm, crying, mild-pain and acute-pain expressions, achieves good robustness to illumination, noise and occlusion in newborn facial images, and provides a new method and approach for developing a newborn painful-expression recognition system.
Owner:NANJING UNIV OF POSTS & TELECOMM
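A minimal PyTorch sketch of a dual-channel network of the kind described above: one small convolutional branch for the grayscale image, one for its LBP feature map, fused before a four-way classifier. Layer sizes and the class order are illustrative assumptions, not the patent's architecture.

```python
import torch
import torch.nn as nn

class DualChannelNet(nn.Module):
    """Two small conv branches (grayscale / LBP image) fused before a softmax head."""
    def __init__(self, n_classes=4):   # calm, crying, mild pain, acute pain
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.gray_branch, self.lbp_branch = branch(), branch()
        self.classifier = nn.Linear(2 * 32 * 4 * 4, n_classes)

    def forward(self, gray, lbp):
        a = self.gray_branch(gray).flatten(1)
        b = self.lbp_branch(lbp).flatten(1)
        return self.classifier(torch.cat([a, b], dim=1))  # logits; softmax in the loss

# logits = DualChannelNet()(gray_batch, lbp_batch)  # each of shape (N, 1, H, W)
```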

Human face age estimation method based on fusion of deep characteristics and shallow characteristics

The invention discloses a human face age estimation method based on the fusion of deep and shallow features. The method comprises the following steps: preprocessing each face sample image in a face sample dataset; training a constructed initial convolutional neural network and selecting a convolutional neural network for face recognition; fine-tuning the selected convolutional neural network with a face dataset labeled with age values to obtain several convolutional neural networks for age estimation; extracting the multi-level age features corresponding to the face and outputting them as the deep features; extracting the HOG (Histogram of Oriented Gradients) feature and the LBP (Local Binary Pattern) feature of each face image as the shallow features; constructing a deep belief network to fuse the deep and shallow features; and performing age regression on the fused features in the deep belief network to output an age estimation result. The method improves age estimation accuracy and provides highly accurate age estimation from face images.
Owner:NANJING UNIV OF POSTS & TELECOMM
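The shallow-feature step above (HOG plus LBP) can be approximated with scikit-image as in the sketch below; the deep CNN features and the deep-belief-network fusion are omitted, and all parameter values are illustrative.

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern

def shallow_features(gray):
    """Concatenate HOG and uniform-LBP histogram features for one face image."""
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
    codes = local_binary_pattern(gray, 8, 1, method="uniform")
    lbp_hist, _ = np.histogram(codes, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_vec, lbp_hist])
```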

Iris recognition method based on local binary pattern features and graph matching

The invention discloses an iris recognition method based on local binary pattern (LBP) features and graph matching. First, local binary pattern codes are extracted from the ordinal relations between pairs of pixel gray values in each iris image neighborhood, describing the oriented texture of the iris in a way that is invariant to illumination. Second, the iris image is divided into a number of image blocks, and the local binary pattern histogram of each block is computed to describe the oriented texture statistics of the iris with robustness to translation and deformation. Each image block is treated as a node and its local binary pattern histogram as the node's attribute, so that each iris image is represented as a graph. During recognition, a graph matching method searches for matching node pairs between two graphs, and the number of matching node pairs between the recognition image and the enrolled image measures the similarity of the two graphs, from which the user's identity is judged. The invention can be used for automatic identity authentication in application fields such as access control, attendance and border clearance.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI
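A small sketch of the block-wise LBP histogram step, which produces the per-node attributes that the patent's graph matching operates on; the grid size and LBP parameters are assumptions, and the graph matching itself is not shown.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def block_lbp_histograms(iris, grid=(8, 8), n_points=8, radius=1):
    """Split a normalized iris image into blocks and return one LBP histogram per block."""
    codes = local_binary_pattern(iris, n_points, radius, method="uniform")
    n_bins = n_points + 2
    rows, cols = grid
    bh, bw = codes.shape[0] // rows, codes.shape[1] // cols
    nodes = []
    for r in range(rows):
        for c in range(cols):
            block = codes[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)
            nodes.append(hist)
    return np.array(nodes)          # shape: (rows * cols, n_bins), one row per graph node
```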

Identification method for human facial expression based on two-step dimensionality reduction and parallel feature fusion

The invention claims a human facial expression recognition method based on two-step dimensionality reduction and parallel feature fusion. The two-step dimensionality reduction works as follows: first, the two kinds of facial expression features to be fused are each reduced in the real field by principal component analysis (PCA), and the reduced features are then fused in parallel in a unitary (complex) space; second, a hybrid discriminant analysis (HDA) method defined on the unitary space is proposed as the dimensionality reduction method for that space. Two kinds of features, a local binary pattern (LBP) feature and a Gabor wavelet feature, are extracted, the two reduction steps are combined into one framework, and finally a support vector machine (SVM) is trained for classification. The method effectively reduces the dimensionality of the parallel-fused features, recognizes six facial expressions with an effectively improved recognition rate, and avoids the drawbacks of serial feature fusion and single-feature representations. It can be widely applied in pattern recognition fields such as security video surveillance of public places, safe-driving monitoring of vehicles, psychological research and medical monitoring.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
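A hedged sketch of the first reduction step and the parallel (complex-field) fusion described above, using scikit-learn PCA and an SVM; the unitary-space HDA second step is replaced here by simply stacking the real and imaginary parts, so this only illustrates the data flow, not the patent's HDA method.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def parallel_fuse(lbp_feats, gabor_feats, n_components=50):
    """Step 1: PCA on each feature set; then parallel fusion z = a + i*b."""
    a = PCA(n_components=n_components).fit_transform(lbp_feats)
    b = PCA(n_components=n_components).fit_transform(gabor_feats)
    z = a + 1j * b                      # fused features in the complex (unitary) space
    # Stand-in for the unitary-space HDA reduction: hand the SVM a real vector.
    return np.hstack([z.real, z.imag])

# clf = SVC(kernel="rbf").fit(parallel_fuse(lbp_train, gabor_train), labels)
# (lbp_train, gabor_train, labels are assumed to exist; six expression classes.)
```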

Dense population estimation method and system based on multi-feature fusion

The invention provides a dense population (crowd) estimation method and system based on multi-feature fusion. The method comprises the following steps: partitioning an image into N equal sub-blocks; performing hierarchical background modeling on the image with a method combining a CSLBP (Center-Symmetric Local Binary Pattern) histogram texture model and mixture-of-Gaussians background modeling; extracting the perspective-corrected foreground area of each sub-block; detecting the edge density of each sub-block with an improved Sobel edge detection operator; extracting four texture feature vectors in different directions, combining the CSLBP transform with a gray-level co-occurrence matrix, to describe the image texture; reducing the dimensionality of the extracted crowd foreground and texture feature vectors by principal component analysis; feeding the reduced feature vectors into the input layer of a neural network model and obtaining the crowd estimate of each sub-block from the output layer; and summing the sub-block estimates to obtain the total crowd count. The method and system have high accuracy and robustness, and achieve good results in crowd counting experiments on subway station surveillance videos.
Owner:HARBIN INST OF TECH SHENZHEN GRADUATE SCHOOL
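A sketch of the CSLBP (center-symmetric LBP) code on which the texture model above is built: each pixel compares its four diametrically opposed neighbour pairs, giving 16 possible codes. The threshold is an assumption, and the per-block histogramming, background model and neural network are omitted.

```python
import numpy as np

def cslbp(gray, threshold=3.0):
    """Center-symmetric LBP: a 4-bit (16-bin) code per pixel, compact for per-block models."""
    g = gray.astype(np.float64)
    pairs = [  # (neighbour, diametrically opposite neighbour)
        (g[:-2, 1:-1], g[2:, 1:-1]),   # N  vs S
        (g[:-2, 2:],   g[2:, :-2]),    # NE vs SW
        (g[1:-1, 2:],  g[1:-1, :-2]),  # E  vs W
        (g[2:, 2:],    g[:-2, :-2]),   # SE vs NW
    ]
    code = np.zeros((g.shape[0] - 2, g.shape[1] - 2), dtype=np.uint8)
    for i, (p, q) in enumerate(pairs):
        code |= ((p - q) > threshold).astype(np.uint8) << i
    return code  # histogram the codes per sub-block to obtain the texture vector
```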

Face recognition method based on reference features

The invention discloses a face recognition method based on reference features. The method comprises the following steps: scale-invariant features and local binary pattern features of the face image to be recognized are extracted; principal component analysis is used for dimensionality reduction to obtain the image features of the face image; the similarity between the obtained image features and each cluster center is calculated to obtain the reference features of the face image; and the similarity between the reference features of the face image to be recognized and the reference features in the training dataset is calculated to obtain the recognition result. The reference features of a face image combine its texture information and structure information, so the method represents the face more comprehensively than prior methods that represent only the texture or only the structure of the face. The feature extraction process is simple and easy to implement, the recognition result is highly accurate, and a high recognition rate is achieved across different poses of the same person.
Owner:HUAZHONG UNIV OF SCI & TECH
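A sketch of the "reference feature" idea described above: reduce the concatenated SIFT + LBP descriptors with PCA, cluster the training set, and represent each face by its similarity to every cluster centre. Variable names, the number of components and clusters, and the use of cosine similarity are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def build_reference_encoder(train_feats, n_components=100, n_clusters=50):
    """train_feats: (n_samples, n_dims) concatenated SIFT + LBP descriptors."""
    pca = PCA(n_components=n_components).fit(train_feats)
    centers = KMeans(n_clusters=n_clusters, n_init=10).fit(
        pca.transform(train_feats)).cluster_centers_

    def encode(feat_vec):
        """Reference features: cosine similarity of one face to each cluster centre."""
        x = pca.transform(feat_vec.reshape(1, -1))[0]
        return centers @ x / (np.linalg.norm(centers, axis=1)
                              * np.linalg.norm(x) + 1e-12)
    return encode  # faces are then compared via their reference vectors
```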

Binocular vision depth feature and apparent feature-combined face living body detection method

The present invention provides a face living-body (liveness) detection method combining binocular vision depth features and appearance features. The method includes the following steps: step 1, a binocular vision system is set up; step 2, the face is detected by the binocular vision system to obtain a number of key points; step 3, a binocular vision depth feature and its corresponding classification score are obtained; step 4, the complete face area is cropped from the left image, normalized to a fixed size, and a local binary pattern (LBP) feature is extracted as the appearance descriptor; step 5, the face liveness detection score corresponding to this micro-texture feature is obtained; and step 6, the classification score from step 3 and the liveness score from step 5 are fused at the decision level to judge whether the image to be detected shows a live subject. The method has the advantages of a simple algorithm, high running speed and high precision, and provides a new and reliable approach to face liveness detection.
Owner:SHANGHAI JIAO TONG UNIV
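Step 6 above is a decision-level fusion of two classifier scores; a minimal sketch follows, with the weight and threshold as placeholders rather than values from the patent.

```python
def fuse_liveness_scores(depth_score, texture_score, w_depth=0.5, threshold=0.5):
    """Decision-level fusion of the binocular-depth classifier score and the
    LBP micro-texture classifier score (weights/threshold are illustrative)."""
    fused = w_depth * depth_score + (1.0 - w_depth) * texture_score
    return fused >= threshold   # True -> treat the face as a live subject
```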

RGB (Red Green Blue) and IR (Infrared) binocular camera-based living body detecting method and device

The invention relates to a living-body (liveness) detection method and device based on RGB (Red Green Blue) and IR (Infrared) binocular cameras. The method comprises the steps of obtaining two video streams from an RGB camera and an IR camera respectively; performing face detection and liveness judgment on the video frames of both streams; and, when both streams are judged as live, regarding the face in the current video frame as a live human face. Specifically, the two cameras collect face video and face detection obtains the RGB face and the IR face respectively. For the RGB color face image, LBP (Local Binary Pattern) features are extracted with a traditional image processing algorithm and an SVM (Support Vector Machine) classifier judges whether the face is live; meanwhile, the IR face image is fed directly into a trained CNN (convolutional neural network) for classification to judge whether the face is live. Only if both channels judge the face as live is it finally accepted as a live face. The method has the benefits of high robustness, low cost and suitability for large-scale use.
Owner:深圳神目信息技术有限公司 (Shenzhen Shenmu Information Technology Co., Ltd.)
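The final decision logic described above requires both channels to agree; a minimal sketch, with the trained LBP+SVM classifier, the IR CNN and the feature extractor passed in as assumed, pre-existing callables.

```python
def is_live_face(rgb_face, ir_face, lbp_svm, ir_cnn, extract_lbp):
    """A face passes only if BOTH channels vote 'live': the RGB frame via an
    LBP feature + SVM, the IR frame via a trained CNN.
    All callables are assumed to be trained and provided elsewhere."""
    rgb_live = lbp_svm.predict([extract_lbp(rgb_face)])[0] == 1
    ir_live = ir_cnn(ir_face) == 1
    return rgb_live and ir_live
```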

Method and system for detecting pedestrian in front of vehicle

The invention discloses a method and a system for detecting pedestrians in front of a vehicle. The method comprises the steps of image acquisition and preprocessing, image scaling, LBP (Local Binary Pattern) and HOG (Histogram of Oriented Gradients) feature extraction, region-of-interest extraction, target identification, and target fusion and early warning, so that the driver is alerted in time when a pedestrian appears in front of the vehicle. The system comprises three parts: an image acquisition unit, an SOPC (System on a Programmable Chip) unit and an ASIC (Application Specific Integrated Circuit) unit. The image acquisition unit is a camera unit; the SOPC unit comprises an image preprocessing unit, a region-of-interest extraction unit, a target identification unit, and a target fusion and early warning unit; and the ASIC unit comprises an image scaling unit, an LBP feature extraction unit and an HOG feature extraction unit. LBP and HOG features are used jointly, and the two-level detection improves the overall accuracy of pedestrian detection; HOG feature extraction is dynamically adjusted according to the classification results of an LBP-based SVM (Support Vector Machine), which reduces the amount of computation, increases the computing speed and improves driving safety.
Owner:SHANGHAI UNIV
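A sketch of the two-level idea above, in which the cheaper LBP+SVM stage gates the more expensive HOG stage; the thresholds, the gating rule and all callables are illustrative assumptions rather than the patent's hardware pipeline.

```python
def detect_pedestrian(window, lbp_svm, hog_svm, extract_lbp, extract_hog,
                      lbp_margin=0.5):
    """Two-level check: a cheap LBP + SVM filter runs first; the costlier HOG + SVM
    stage is only evaluated when the LBP stage is positive but not confident.
    (Classifiers and feature extractors are assumed trained elsewhere.)"""
    lbp_score = lbp_svm.decision_function([extract_lbp(window)])[0]
    if lbp_score < 0:
        return False                      # early reject: no HOG computation needed
    if lbp_score > lbp_margin:
        return True                       # confident accept, skip HOG as well
    return hog_svm.predict([extract_hog(window)])[0] == 1
```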

Face identification method based on wavelet multi-scale analysis and local binary pattern

The invention, which relates to technical fields including pattern recognition, image processing and computer vision, provides a face recognition method based on wavelet multi-scale analysis and the local binary pattern (LBP). The method comprises the following steps: selecting suitable face images; performing a multi-scale wavelet analysis on each training image to obtain its first-level and second-level low-frequency approximation images; applying the LBP operator to the low-frequency approximation images to obtain the LBP feature value of every pixel; computing blocked LBP histograms of the images and concatenating the block histograms of the two levels to obtain the feature vector representation of the face image; and, for a face to be recognized, obtaining the feature vector of its image in the same way and completing recognition with the chi-square (χ²) statistic. The method effectively reduces the influence of image noise, enhances the extraction of image texture features, and has the advantages of good robustness, a high recognition rate, fast computation and practical value.
Owner:SOUTHEAST UNIV
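A rough sketch of the feature pipeline above using PyWavelets and scikit-image: blocked uniform-LBP histograms are computed on the level-1 and level-2 low-frequency wavelet approximations and concatenated, and probe/gallery vectors are compared with the chi-square distance. The wavelet family, grid size and LBP parameters are assumptions.

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

def wavelet_lbp_vector(gray, levels=2, grid=(4, 4)):
    """Concatenated blocked LBP histograms of the level-1 and level-2 LL sub-bands."""
    feats = []
    approx = gray.astype(np.float64)
    for _ in range(levels):
        approx, _ = pywt.dwt2(approx, "haar")        # keep only the LL approximation
        codes = local_binary_pattern(approx, 8, 1, method="uniform")
        bh, bw = codes.shape[0] // grid[0], codes.shape[1] // grid[1]
        for r in range(grid[0]):
            for c in range(grid[1]):
                block = codes[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
                h, _ = np.histogram(block, bins=10, range=(0, 10), density=True)
                feats.append(h)
    return np.concatenate(feats)

def chi_square(p, q, eps=1e-10):
    """Chi-square distance used to match a probe vector against gallery vectors."""
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))
```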