1090 results about "Emotion recognition" patented technology

Emotion recognition is the process of identifying human emotion, most typically from facial expressions as well as from verbal expressions. Humans do this automatically, and computational methodologies have also been developed to perform it.

Intelligent gender and emotion recognition detection system and method based on vision and voice

The invention discloses an intelligent gender and emotion recognition detection system and method based on vision and voice. The system comprises an image-based emotion and gender recognition module, a voice-based emotion and gender recognition module, a fusion module and a personalized intelligent voice interaction system. The image-based module recognizes the emotion and gender of a person in the vehicle from a face image; the voice-based module recognizes the emotion and gender of the person from voice; the fusion module matches the gender recognition results, fuses the emotion recognition results, and sends the results to the personalized intelligent voice interaction system, which handles voice interaction. By fusing the image-based and voice-based recognition results, the system improves gender/emotion recognition accuracy; the personalized intelligent voice interaction system improves the driving experience and driving safety; and voice interaction makes vehicle-mounted equipment more enjoyable to use and information services more accurate.
Owner:BEIJING ILEJA TECH CO LTD
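The fusion step above can be sketched as a decision-level weighted average of the two modalities' emotion probability distributions. The weight, emotion labels and function names below are illustrative assumptions; the patent does not disclose its actual fusion scheme.

```python
import numpy as np

EMOTIONS = ["angry", "happy", "neutral", "sad"]  # hypothetical label set

def fuse_predictions(image_probs, voice_probs, image_weight=0.6):
    """Weighted-average (decision-level) fusion of two emotion
    probability distributions. The 0.6 weight is an assumption."""
    image_probs = np.asarray(image_probs, dtype=float)
    voice_probs = np.asarray(voice_probs, dtype=float)
    fused = image_weight * image_probs + (1.0 - image_weight) * voice_probs
    return EMOTIONS[int(np.argmax(fused))], fused

# Image model leans "happy", voice model agrees more weakly.
label, fused = fuse_predictions([0.1, 0.7, 0.1, 0.1],
                                [0.2, 0.5, 0.2, 0.1])
```

Because both inputs are probability distributions and the weights sum to one, the fused vector remains a valid distribution.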

Household information acquisition and user emotion recognition equipment and working method thereof

The invention discloses household information acquisition and user emotion recognition equipment, which comprises a shell, a power supply, a main controller, a microcontroller, multiple environmental sensors, a screen, a microphone, a speaker, multiple health sensors, a pair of robot arms and a pair of cameras. The microphone is arranged on the shell; the power supply, the main controller, the microcontroller, the environmental sensors, the speaker and the pair of cameras are arranged symmetrically on the left and right sides of the screen; and the robot arms are arranged on the two sides of the shell. The main controller is in communication connection with the microcontroller and directs it to drive the robot arms through their motors; the power supply is connected to the main controller and the microcontroller and supplies them with power. By integrating intelligent speech recognition, speech synthesis and facial expression recognition technologies, the equipment is more convenient to use and gives more reasonable feedback.
Owner:HUAZHONG UNIV OF SCI & TECH

Virtual learning environment natural interaction method based on multimode emotion recognition

The invention provides a virtual learning environment natural interaction method based on multimode emotion recognition. The method comprises the following steps: expression information, posture information and voice information representing the learning state of a student are acquired, and multimode emotion features based on a color image, depth information, a voice signal and skeleton information are constructed; facial detection, preprocessing and feature extraction are performed on the color image and a depth image, and a support vector machine (SVM) combined with an AdaBoost method performs facial expression classification; preprocessing and emotion feature extraction are performed on the voice emotion information, and a hidden Markov model recognizes the voice emotion; regularization processing is performed on the skeleton information to obtain human body posture representation vectors, and a multi-class SVM performs posture emotion classification; and a quadrature-rule fusion algorithm fuses the three recognition results at the decision-making layer, and emotion performance such as the expression, voice and posture of a virtual intelligent body is generated according to the fusion result.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
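The skeleton regularization step can be illustrated as a simple translation- and scale-normalization of joint coordinates into a fixed-length posture vector for the multi-class SVM. The joint indices and scaling choice below are hypothetical; the patent does not specify its exact regularization.

```python
import numpy as np

def skeleton_to_posture_vector(joints, root=0, ref=1):
    """Normalize a (J, 3) array of joint coordinates into a
    translation- and scale-invariant posture vector. Using joint
    `root` as origin (e.g. hip) and the distance to joint `ref`
    (e.g. neck) as scale are illustrative assumptions."""
    joints = np.asarray(joints, dtype=float)
    centered = joints - joints[root]          # translation invariance
    scale = np.linalg.norm(centered[ref])     # torso length as scale
    return (centered / scale).ravel()         # flatten for an SVM

# Toy 3-joint skeleton: hip at origin, neck 2 units up, shoulder offset.
vec = skeleton_to_posture_vector([[0, 0, 0], [0, 2, 0], [1, 2, 0]])
```

After normalization the reference joint always sits at unit distance from the root, so skeletons of differently sized people map to comparable vectors.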

Emotional music recommendation method based on brain-computer interaction

The invention discloses an emotional music recommendation method based on brain-computer interaction. Music corresponding to the user's emotion is automatically searched for and recommended by acquiring the user's electroencephalogram (EEG) signals. The process includes the following steps: firstly, the user's EEG signals are extracted by an electroencephalogram acquisition instrument and decomposed by wavelet analysis into four wave bands (alpha, beta, gamma and delta); the frequency-band energy of the four bands is taken as a feature, a trained electroencephalogram emotion recognition model EMSVM recognizes the emotion, and the emotion category of the EEG signals is judged. External music signals are evenly decomposed into eight frequency bands within the range of 20 Hz-20 kHz, the energy values of the eight bands are taken as characteristic values, a trained music emotion recognition model MMSVM recognizes the music emotion, and a music emotion database MMD is built. Music corresponding to the index numbers is then recommended to the user according to the emotion category of the EEG signals, implementing an emotion-based music recommendation system. The method offers a new approach to infant music education, sleep therapy and music search.
Owner:NANJING NORMAL UNIVERSITY
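The band-energy feature extraction can be sketched as follows. The patent uses wavelet decomposition; for brevity this sketch estimates per-band energy with an FFT instead, producing the same kind of four-value feature vector. The band edges are conventional EEG ranges, not taken from the patent.

```python
import numpy as np

# Conventional EEG band edges in Hz (assumed, not from the patent).
BANDS = {"delta": (1, 4), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_energy_features(signal, fs=128):
    """Per-band spectral energy of a 1-D EEG segment, via FFT power."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in BANDS.values()])

# A pure 10 Hz tone should put almost all energy in the alpha band.
fs = 128
t = np.arange(2 * fs) / fs
feats = band_energy_features(np.sin(2 * np.pi * 10 * t), fs=fs)
```

The resulting four-element vector is the kind of feature the trained EMSVM classifier would consume.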

Chinese text emotion recognition method

The invention discloses a Chinese text emotion recognition method which includes the following steps: (1) respectively building a commendatory-derogatory-term dictionary, a degree-term dictionary and a privative-term (negation) dictionary; (2) carrying out term segmentation on the sentences of the Chinese text to be processed, and obtaining the dependence relationships and term frequency of the terms; (3) selecting subject terms according to the term frequency, and marking sentences containing the subject terms as subject sentences; (4) judging whether each term in a subject sentence exists in the commendatory-derogatory-term dictionary, determining the term's initial emotion value, determining its modifying degree terms and privative terms from the dependence relationships, determining the term's weight from the values of the modifying degree terms in the degree-term dictionary, determining its polarity from the number of privative terms, obtaining the term's emotion value, and summing the emotion values of all terms of the subject sentence to obtain the sentence's emotion value; and (5) summing the emotion values of all sentences in the text to obtain the emotion state of the text. The method greatly improves the emotion recognition accuracy of the text.
Owner:COMP NETWORK INFORMATION CENT CHINESE ACADEMY OF SCI
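Steps (4)-(5) amount to a lexicon-based scoring rule: each sentiment term's initial value is scaled by its degree modifier and its sign is flipped once per negation. A minimal sketch with toy English dictionaries follows; the real method uses Chinese dictionaries and dependency relations rather than the fixed context window assumed here.

```python
# Toy dictionaries standing in for the patent's three term dictionaries.
POLARITY = {"good": 1.0, "excellent": 2.0, "bad": -1.0}  # sentiment values
DEGREE = {"very": 2.0, "slightly": 0.5}                  # degree weights
NEGATION = {"not"}                                       # privative terms

def sentence_score(tokens):
    """Sum of per-term emotion values: initial value x degree weight
    x (-1)^(number of preceding negations)."""
    score = 0.0
    for i, tok in enumerate(tokens):
        if tok not in POLARITY:
            continue
        value = POLARITY[tok]
        weight, negations = 1.0, 0
        for prev in tokens[max(0, i - 2):i]:  # small modifier window
            if prev in DEGREE:
                weight = DEGREE[prev]
            elif prev in NEGATION:
                negations += 1
        score += value * weight * (-1.0) ** negations
    return score
```

The text-level emotion state of step (5) is then just the sum of `sentence_score` over all subject sentences.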

Man-machine interaction method and system for online education based on artificial intelligence

Pending · CN107958433A · Solve the problem of poor learning effect · Data processing applications · Speech recognition · Personalization · Online learning
The invention discloses a man-machine interaction method and system for online education based on artificial intelligence, and relates to digitalized visual and acoustic technology in the field of electronic information. The system comprises a subsystem that recognizes the emotion of an audience and an intelligent session subsystem. The two subsystems are combined with an online education system to better present personalized teaching content to the audience. The system starts from improving the vividness of man-machine interaction in online education. The emotion recognition subsystem judges the learning state of a user through the user's expression while watching a video, and the intelligent session subsystem then carries out machine Q&A interaction. The emotion recognition subsystem classifies the emotions of audiences into seven types: angry, aversion, fear, sadness, surprise, neutrality, and happiness. The intelligent session subsystem adjusts the corresponding course content according to the different emotions and carries out machine Q&A interaction, thereby presenting the teacher-student interaction and feedback of a conventional class in an online mode and making the online class more personalized.
Owner:JILIN UNIV
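The course-adjustment policy can be sketched as a lookup from the seven recognized emotion classes to an action. All actions below are hypothetical placeholders; the patent does not publish its adjustment rules.

```python
# Hypothetical mapping from the seven recognized emotions to an action.
ACTIONS = {
    "angry":    "pause and offer a short Q&A break",
    "aversion": "switch to an alternative explanation",
    "fear":     "insert an easier warm-up exercise",
    "sadness":  "play an encouraging summary",
    "surprise": "repeat the previous key point",
    "neutral":  "continue the lesson as planned",
    "happy":    "advance to the next topic",
}

def adjust_course(emotion):
    """Return the course adjustment for a recognized emotion,
    defaulting to the neutral action for unknown labels."""
    return ACTIONS.get(emotion, ACTIONS["neutral"])
```

Defaulting unknown labels to the neutral action keeps the session running even if the classifier emits an unexpected class.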