596 results about "Facial expression recognition" patented technology

Weak hypothesis generation apparatus and method, learning apparatus and method, detection apparatus and method, facial expression learning apparatus and method, facial expression recognition apparatus and method, and robot apparatus

A facial expression recognition system, and a learning method for the system, are provided. The system uses a face detection apparatus that achieves efficient learning and high-speed detection based on ensemble learning when detecting an area representing a detection target, is robust against shifts of the face position in images, and is capable of highly accurate expression recognition. When the face detection apparatus learns data by AdaBoost, the following processing is repeated to sequentially generate weak hypotheses and thereby acquire a final hypothesis: high-performance weak hypotheses are selected from all weak hypotheses, new weak hypotheses are generated from these high-performance weak hypotheses on the basis of statistical characteristics, and the single weak hypothesis with the highest discrimination performance is selected from them. In detection, an abort threshold value learned in advance is used to determine, every time a weak hypothesis outputs its discrimination result, whether the provided data can obviously be judged a non-face; if it can, processing is aborted. A predetermined Gabor filter is selected for the detected face image by an AdaBoost technique, and a support vector machine is learned only on the feature quantity extracted by the selected filter, thus performing expression recognition.
Owner:SAN DIEGO UNIV OF CALIFORNIA +1
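
The early-abort evaluation described in this abstract (stop accumulating weak-hypothesis scores as soon as the running sum falls below a learned threshold) can be sketched roughly as follows. This is a minimal illustration, assuming simple decision-stump weak hypotheses; the stumps, abort thresholds and feature values are placeholders, not the patent's learned values.

    # Sketch of boosted-classifier evaluation with an abort threshold. Each weak
    # hypothesis is assumed to be a (feature_index, threshold, polarity, weight)
    # decision stump; abort_thresholds[t] is the minimum plausible score after t+1 stumps.
    def evaluate_with_abort(x, weak_hypotheses, abort_thresholds):
        score = 0.0
        for t, (idx, thr, polarity, alpha) in enumerate(weak_hypotheses):
            vote = 1.0 if polarity * (x[idx] - thr) > 0 else -1.0
            score += alpha * vote
            if score < abort_thresholds[t]:   # obviously a non-face: abort early
                return -1.0
        return score                          # positive score: face candidate

    # Hypothetical usage: three stumps over a 4-dimensional feature vector.
    stumps = [(0, 0.5, 1, 0.8), (2, 0.1, -1, 0.6), (3, 0.9, 1, 0.4)]
    aborts = [-0.9, -0.5, 0.0]
    print(evaluate_with_abort([0.7, 0.2, 0.05, 1.2], stumps, aborts))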

Household information acquisition and user emotion recognition equipment and working method thereof

The invention discloses household information acquisition and user emotion recognition equipment, which comprises a shell, a power supply, a main controller, a microcontroller, multiple environmental sensors, a screen, a microphone, an audio unit, multiple health sensors, a pair of robot arms and a pair of cameras. The microphone is arranged on the shell; the power supply, the main controller, the microcontroller, the environmental sensors, the audio unit and the pair of cameras are arranged symmetrically on the left and right sides relative to the screen; the robot arms are arranged on the two sides of the shell. The main controller is in communication connection with the microcontroller and, through the microcontroller, controls the motors that move the robot arms. The power supply is connected with the main controller and the microcontroller and mainly supplies energy to both. The equipment integrates intelligent speech recognition, speech synthesis and facial expression recognition technologies, so that it is more convenient to use and its feedback is more reasonable.
Owner:HUAZHONG UNIV OF SCI & TECH
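
As a rough illustration of the control flow described above, the main controller fuses speech and facial-expression cues and instructs the microcontroller to drive the robot arm motors. The class names, emotion labels and motor commands below are hypothetical placeholders, not taken from the patent.

    # Minimal sketch of the main controller / microcontroller split (all names illustrative).
    class Microcontroller:
        def drive_arm_motors(self, command):
            print(f"motors -> {command}")

    class MainController:
        def __init__(self, mcu):
            self.mcu = mcu

        def respond(self, recognized_speech, facial_emotion):
            # Hypothetical feedback rule combining speech and expression recognition results.
            if facial_emotion == "sad" or "help" in recognized_speech:
                self.mcu.drive_arm_motors("wave_gently")
            else:
                self.mcu.drive_arm_motors("rest")

    MainController(Microcontroller()).respond("hello there", "happy")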

Improved CNN-based facial expression recognition method

The invention provides an improved CNN-based facial expression recognition method, and relates to the field of image classification and recognition. The method comprises the following steps: s1, acquiring a facial expression image from a video stream by using the JDA algorithm, a face detection and alignment algorithm that integrates both functions; s2, for the facial expression image obtained in step s1, correcting the face posture in the real environment, removing background information irrelevant to the expression, and applying scale normalization; s3, training a convolutional neural network model on the normalized facial expression images obtained in step s2 to obtain and store the optimal network parameters; s4, loading the CNN model with the optimal network parameters obtained in step s3 and performing feature extraction on the normalized facial expression images obtained in step s2; s5, classifying and recognizing the facial expression features obtained in step s4 with an SVM classifier. The method has high robustness and good generalization performance.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
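
Steps s3 to s5 amount to using the trained CNN as a fixed feature extractor and classifying the resulting feature vectors with an SVM. A minimal sketch of that split is given below; a random projection stands in for the patent's trained CNN, and the data are synthetic.

    import numpy as np
    from sklearn.svm import SVC

    # Stand-in for step s4: the patent applies a trained CNN to each normalized face;
    # here a fixed random projection is used purely as a placeholder feature extractor.
    rng = np.random.default_rng(0)
    projection = rng.standard_normal((48 * 48, 128))

    def extract_features(face_image):             # face_image: (48, 48) normalized array
        return face_image.reshape(-1) @ projection

    # Hypothetical training data: 100 normalized faces with 7 expression labels.
    faces = rng.random((100, 48, 48))
    labels = rng.integers(0, 7, size=100)

    features = np.stack([extract_features(f) for f in faces])
    classifier = SVC(kernel="rbf")                # step s5: SVM on the extracted features
    classifier.fit(features, labels)
    print(classifier.predict(features[:3]))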

A face multi-region fusion expression recognition method based on deep learning

The invention discloses a face multi-region fusion expression recognition method based on deep learning, which comprises the following steps: detecting the face position with a detection model; obtaining the coordinates of the key points with a key-point model; aligning the eyes according to the eye key points, then aligning the face according to the coordinates of the key points of the whole face, and cropping the face region by affine transformation; cropping the eye and mouth areas of the image to a certain proportion; dividing the convolutional neural network into one backbone network and two branch networks; carrying out feature fusion in the last convolutional layer, and finally obtaining the expression classification result with a classifier. The method utilizes prior information: besides the whole face, the eye and mouth regions are also used as inputs to the network, and through model fusion the network can learn both the global semantic features and the local features of facial expressions. The method thus reduces the difficulty of facial expression recognition, suppresses external noise, and offers strong robustness, high accuracy and low algorithmic complexity.
Owner:SOUTH CHINA UNIV OF TECH
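
The backbone-plus-two-branches fusion can be sketched in PyTorch as below; the layer sizes, input resolutions and seven-class output are illustrative assumptions, not the patent's actual architecture.

    import torch
    import torch.nn as nn

    class MultiRegionFusionNet(nn.Module):
        """Sketch: whole-face backbone plus eye and mouth branches, fused before the classifier."""
        def __init__(self, num_classes=7):
            super().__init__()
            def conv_block(in_ch, out_ch):
                return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                                     nn.ReLU(), nn.AdaptiveAvgPool2d(4))
            self.backbone = conv_block(3, 32)      # whole face
            self.eye_branch = conv_block(3, 16)    # cropped eye region
            self.mouth_branch = conv_block(3, 16)  # cropped mouth region
            self.classifier = nn.Linear((32 + 16 + 16) * 4 * 4, num_classes)

        def forward(self, face, eyes, mouth):
            fused = torch.cat([self.backbone(face).flatten(1),
                               self.eye_branch(eyes).flatten(1),
                               self.mouth_branch(mouth).flatten(1)], dim=1)
            return self.classifier(fused)

    # Hypothetical input sizes: 64x64 face, 32x32 eye and mouth crops.
    model = MultiRegionFusionNet()
    logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 32, 32), torch.randn(2, 3, 32, 32))
    print(logits.shape)   # torch.Size([2, 7])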

PCANet-CNN-based arbitrary-pose facial expression recognition method

The invention discloses a PCANet-CNN-based facial expression recognition method for arbitrary poses. The method comprises the following steps: first pre-processing the original images to obtain grey-level facial images of uniform size, comprising frontal facial images and profile facial images; inputting the frontal facial images into the unsupervised feature learning model PCANet and learning the features corresponding to the frontal facial images; inputting the profile facial images into the supervised feature learning model CNN and training it, using the frontal facial features obtained by unsupervised feature learning as labels, so as to obtain a mapping between frontal and profile facial features; obtaining uniform frontal facial features for facial images at arbitrary poses through this mapping; and finally feeding the uniform frontal facial features into an SVM for training, so as to obtain a single recognition model for arbitrary poses. The method overcomes the low recognition rate caused by modelling each pose separately in traditional multi-pose facial expression recognition and by factors such as pose variation, and can effectively improve the accuracy of multi-pose facial expression recognition.
Owner:JIANGSU UNIV
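
The core idea, a CNN trained on profile images to regress the frontal features produced by unsupervised learning so that all poses map into one frontal feature space, might be sketched as follows. The PCANet output is replaced by placeholder target vectors and the network is deliberately tiny.

    import torch
    import torch.nn as nn

    # Sketch: regress placeholder "frontal" feature vectors (standing in for PCANet output)
    # from profile face images, so that any pose maps into the same feature space.
    mapper = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
        nn.Flatten(), nn.Linear(8 * 8 * 8, 64))
    optimizer = torch.optim.Adam(mapper.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    profile_images = torch.randn(16, 1, 32, 32)    # hypothetical profile faces
    frontal_features = torch.randn(16, 64)         # placeholder PCANet features used as labels

    for _ in range(5):                             # a few illustrative training steps
        optimizer.zero_grad()
        loss = loss_fn(mapper(profile_images), frontal_features)
        loss.backward()
        optimizer.step()
    print(loss.item())

The mapped frontal features would then be fed to an SVM for expression classification, as in the abstract.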

Teaching system based on the internet, facial expression recognition and speech recognition, and realizing method thereof

The invention discloses a teaching system based on the internet, facial expression recognition and speech recognition, and a realizing method of the teaching system. The realizing method comprises the following steps: S1, playing the content of the teaching courses on a first terminal; S2, acquiring video data, the user's speech data and user operations during playing; S3, transmitting the information and the user operations to a main control processor; S4, extracting the user's facial features and pronunciation features and sending them to an analysis processor by the main control processor; S5, comparing the facial features and the pronunciation features with standard templates in the analysis processor; S6, dynamically adjusting the played course content or/and teaching procedures by the main control processor according to the user's current operation and the feedback of the analysis processor, or transmitting the comparison result to a second terminal through a cloud platform in real time. The teaching system is mobile, entertaining and social teaching software that gives students a chance to study independently outside class at any time and in any place; by means of an online teaching mode assisted by real persons, it improves the traditional mode of teaching Chinese as a foreign language.
Owner:深圳极速汉语网络教育有限公司
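
Steps S5 and S6 reduce to comparing the extracted features against a standard template and adjusting the lesson accordingly. A minimal, hypothetical sketch of that comparison-and-feedback step (the threshold and feature vectors are made up):

    import numpy as np

    def similarity(features, template):
        """Cosine similarity between a feature vector and a standard template."""
        return float(np.dot(features, template) /
                     (np.linalg.norm(features) * np.linalg.norm(template)))

    def adjust_course(pronunciation_features, template, threshold=0.8):
        # Hypothetical rule: repeat the segment when pronunciation is far from the
        # standard template, otherwise move on to the next item in the course.
        score = similarity(pronunciation_features, template)
        return "repeat_segment" if score < threshold else "next_item"

    rng = np.random.default_rng(1)
    print(adjust_course(rng.random(16), rng.random(16)))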

Mobile terminal and method for automatically switching wallpaper based on facial expression recognition

The invention provides a mobile terminal that automatically switches wallpaper based on facial expression recognition. The mobile terminal comprises a storage module, a collection module, a comparison module and a wallpaper switching module. The storage module stores the wallpapers, the initial facial expressions and, after classification, the correspondence between wallpapers and initial facial expressions; the collection module collects the current facial expression; the comparison module is connected with the storage module and the collection module and compares the current facial expression with the initial facial expressions; and the wallpaper switching module is connected with the storage module and the comparison module and switches the wallpaper according to the comparison result. Compared with the prior art, the mobile terminal can automatically switch the current wallpaper to one matching the user's current mood by detecting and recognizing the user's facial expression, so as to improve the user's mood. The invention also provides a method for automatically switching wallpaper based on facial expression recognition.
Owner:GUANGDONG OPPO MOBILE TELECOMM CORP LTD
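
A minimal sketch of the storage/comparison/switching logic: a stored mapping from classified expressions to wallpapers, and a switch driven by the recognized current expression. The expression labels and wallpaper file names are invented for illustration.

    # Stored correspondence between classified expressions and wallpapers (placeholder names).
    wallpaper_for_expression = {
        "happy": "sunny_beach.jpg",
        "sad": "warm_sunrise.jpg",
        "angry": "calm_forest.jpg",
    }

    def switch_wallpaper(current_expression, current_wallpaper):
        """Return the wallpaper matching the recognized expression, or keep the current one."""
        return wallpaper_for_expression.get(current_expression, current_wallpaper)

    print(switch_wallpaper("sad", "default.jpg"))   # -> warm_sunrise.jpg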

Facial expression recognition method, convolutional neural network model training method, devices and electronic apparatus

The embodiments of the present invention provide a facial expression recognition method, a convolutional neural network model training method, a facial expression recognition device, a convolutional neural network model training device and an electronic apparatus. The facial expression recognition method includes the following steps: facial expression features are extracted from a face image to be detected by means of the convolutional-layer portion of a convolutional neural network model and the acquired face key points of that image, so that a facial expression feature map is obtained; the ROIs (regions of interest) corresponding to the face key points in the facial expression feature map are determined; pooling is performed on the determined ROIs with the pooling layer of the convolutional neural network model, so that a pooled ROI feature map is obtained; and the facial expression recognition result of the face image is obtained at least according to the ROI feature map. With the facial expression recognition method provided by the embodiments, subtle facial expression changes can be effectively captured while differences caused by different facial poses are better handled; and because the detailed information of changes in multiple regions of the face is fully utilized, subtle facial expression changes and faces in different poses can be recognized more accurately.
Owner:SENSETIME GRP LTD
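
The keypoint-to-ROI pooling step can be sketched with an off-the-shelf ROI pooling operator; here torchvision's roi_align stands in for the patent's pooling layer, and the feature map, key points and box size are placeholder values.

    import torch
    from torchvision.ops import roi_align

    # Placeholder feature map from the convolutional part of the network: (batch, C, H, W).
    feature_map = torch.randn(1, 64, 28, 28)

    # Hypothetical face key points (x, y) in feature-map coordinates (eyes and mouth).
    keypoints = torch.tensor([[7.0, 9.0], [20.0, 9.0], [14.0, 18.0]])

    # Turn each key point into a small square ROI: (batch_index, x1, y1, x2, y2).
    half = 3.0
    boxes = torch.cat([torch.zeros(len(keypoints), 1),
                       keypoints - half, keypoints + half], dim=1)

    # Pool each ROI to a fixed 3x3 grid, a stand-in for the patent's pooling layer.
    roi_features = roi_align(feature_map, boxes, output_size=(3, 3))
    print(roi_features.shape)   # torch.Size([3, 64, 3, 3])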

A multi-view facial expression recognition method based on a mobile terminal

The invention discloses a multi-view facial expression recognition method based on a mobile terminal. The face region is cut out of each image and data enhancement is performed to obtain the training data set for an AA-MDNet model; a multi-pose data set is then obtained by extending the data with a GAN model, and the images are cropped with the multi-scale cropping method of the attention adaptive network (ADN). The cropped images are input into the AA-MDNet model, whose densely connected sub-network, DenseNet, extracts features; based on the extracted features, the attention adaptive network obtains the position parameters of the attention area for expression and pose, and the corresponding region is cropped and scaled from the input image according to these parameters to serve as the input of the next scale. By learning multi-scale fusion of high-level features, high-level features combining global and local information are obtained, and the facial pose and expression categories are finally classified. The invention is of great significance in fields such as human-computer interaction, face recognition and computer vision.
Owner:CHINA UNIV OF GEOSCIENCES (WUHAN)
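
The multi-scale attention cropping (use predicted position parameters to crop an attention region, rescale it, and feed it in as the next scale's input) can be sketched as follows; the attention parameters that ADN would predict are replaced by fixed placeholder values.

    import torch
    import torch.nn.functional as F

    def crop_and_rescale(image, cx, cy, half, out_size=64):
        """Crop a square attention region centred at (cx, cy) and rescale it to out_size."""
        _, _, h, w = image.shape
        x1, x2 = max(0, int(cx - half)), min(w, int(cx + half))
        y1, y2 = max(0, int(cy - half)), min(h, int(cy + half))
        region = image[:, :, y1:y2, x1:x2]
        return F.interpolate(region, size=(out_size, out_size),
                             mode="bilinear", align_corners=False)

    image = torch.randn(1, 3, 128, 128)            # scale-1 input (placeholder face image)
    # Placeholder attention parameters in place of ADN's predictions: centre and half-width.
    scale2_input = crop_and_rescale(image, cx=64, cy=70, half=32)
    scale3_input = crop_and_rescale(scale2_input, cx=32, cy=30, half=16)
    print(scale2_input.shape, scale3_input.shape)  # both (1, 3, 64, 64)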