
46 results about "Emotion perception" patented technology

Emotion perception refers to the capacity to recognize and identify emotions in others, along with the biological and physiological processes involved. Emotions are typically viewed as having three components: subjective experience, physical changes, and cognitive appraisal. Emotion perception is the ability to make accurate judgments about another person's subjective experience by interpreting their physical changes through the sensory systems responsible for converting these observed changes into mental representations. The ability to perceive emotion is believed to be both innate and subject to environmental influence, and it is a critical component of social interaction. How emotion is experienced and interpreted depends on how it is perceived; likewise, how emotion is perceived depends on past experiences and interpretations. Humans can perceive emotion accurately. Emotions can be perceived visually, audibly, through smell, and through bodily sensations, and this process is believed to differ from the perception of non-emotional material.

Hotel service robot system based on cloud voice communication and emotion perception

The invention relates to the technical field of hotel service robots and discloses a hotel service robot system based on cloud voice communication and emotion perception. The system includes a cloud voice recognition module, a semantic understanding module, a voice playing module, a facial expression recognition module, a face recognition module, a display module, an action recognition module, an emotion recognition module, a check-in/check-out module, a material printing module, a payment module, a surveillance video system module, an identity card scanning module, and the like. These modules are integrated in a hotel service robot. The robot uses face recognition to determine whether a customer needs check-in or check-out service and handles the service through voice communication with the customer; it also provides functions such as printing related materials, payment, and identity scanning. The robot judges customer emotion from three sources of information (facial expression, voice, and action) so as to appropriately adjust the emotional tone of its communication with the customer, and it can call the hotel monitoring system to perform face recognition in order to remind customers whose face images have not yet been collected to register them.
Owner:SOUTH CHINA UNIV OF TECH +1

Internet-of-things emotion perception technology-based internet-of-things social application system

The invention discloses an internet-of-things social application system based on internet-of-things emotion perception technology. The system comprises a hardware acquisition module and a social APP. The hardware acquisition module measures and records users' heart rate and skin temperature data in real time and sends the data to a mobile terminal. The social APP, installed on the mobile terminal, provides a basic chat function and has a built-in emotion perception algorithm; after being correctly paired with the hardware acquisition module, it receives the uploaded data in real time, analyzes the emotion of the user wearing the hardware, and conveys the emotion perception information through contour-pattern changes or vibration, thereby realizing emotion perception and interaction. Addressing the fact that current internet-of-things social applications lack scientific measurement of interacting users' real-time emotional changes, the method integrates internet-of-things emotion perception technology into internet social applications, filling the blind spot whereby current social applications ignore users' emotional interaction, improving the interactive experience, and enriching social quality.
Owner:ZHEJIANG UNIV

A vision-based side face posture resolving method and an emotion perception autonomous service robot

The invention discloses a vision-based side-face posture resolving method and an emotion perception autonomous service robot. The side-face posture resolving method acquires, in real time, the attitude angle of a face detected in the current state relative to the frontal-face state, through the following steps: constructing an attitude change model of the face area; constructing a shear angle model of the face; and constructing an attitude angle resolving model based on shear-angle elimination. The emotion perception autonomous service robot comprises an acquisition module preloaded with the side-face posture resolving method; a navigation module that, according to the face attitude angle obtained by the acquisition module, controls the robot to move to a position directly facing the person in order to collect a frontal face image; and a face recognition and emotion sensing module that finally performs identity recognition and emotion detection. The method and robot solve the problem that a person's identity and emotion cannot be identified when only a side face is visible.
Owner:INST OF ELECTRONICS CHINESE ACAD OF SCI

Method for speech emotion recognition by utilizing emotion perception spectrum characteristics

The invention relates to a method for speech emotion recognition using emotion perception spectrum features. First, a pre-emphasis method is used to perform high-frequency enhancement on the input speech signal, and the signal is then converted into the frequency domain by the fast Fourier transform to obtain a speech frequency signal. The speech frequency signal is divided into a number of sub-bands using an emotion perception sub-band division method. Emotion perception spectrum features are calculated for each sub-band; these features comprise an emotion entropy feature, an emotional spectrum harmonic tilt, and an emotional spectrum harmonic flatness. Global statistical features are then computed over the spectrum features to obtain a global emotion perception spectrum feature vector. Finally, this feature vector is input to an SVM classifier to obtain the emotion category of the speech signal. Based on the principles of the speech psychoacoustic model, the perceptual sub-band division method accurately describes the emotional state information, emotion recognition is carried out on the sub-band spectrum features, and the recognition rate is improved by 10.4% compared with conventional MFCC features.
Owner:湖南商学院 +1
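The per-sub-band features named in the abstract (entropy, harmonic tilt, flatness) can be sketched as below. This is an illustrative reading of the abstract, not the patented algorithm: the band edges, the epsilon guards, and the use of a least-squares slope for "tilt" are all assumptions.

```python
import math

def subband_features(power_spectrum, band_edges):
    """Compute per-band spectral features loosely modeled on the abstract:
    spectral entropy, harmonic tilt (a least-squares slope), and flatness.
    `power_spectrum` is a list of non-negative FFT bin powers; `band_edges`
    gives (start, end) bin indices for each perceptual sub-band."""
    feats = []
    for start, end in band_edges:
        band = power_spectrum[start:end]
        n = len(band)
        total = sum(band) or 1e-12
        probs = [p / total for p in band]
        # entropy: how evenly energy spreads across the band
        entropy = -sum(p * math.log(p + 1e-12) for p in probs)
        # tilt: slope of a least-squares line over (bin index, power)
        mean_x = (n - 1) / 2
        mean_y = total / n
        num = sum((i - mean_x) * (p - mean_y) for i, p in enumerate(band))
        den = sum((i - mean_x) ** 2 for i in range(n)) or 1e-12
        tilt = num / den
        # flatness: geometric mean over arithmetic mean (1.0 = flat band)
        geo = math.exp(sum(math.log(p + 1e-12) for p in band) / n)
        flatness = geo / (mean_y + 1e-12)
        feats.append((entropy, tilt, flatness))
    return feats
```

A perfectly flat band gives tilt 0 and flatness close to 1, which matches the usual behavior of these measures; the global feature vector in the abstract would then be statistics (mean, variance, etc.) pooled over these tuples.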

A method for pushing emotional regulation services

The invention discloses an emotion regulation service pushing method and a wearable collaborative pushing system. Taking the capabilities of wearable terminals into account, multiple wearable terminals of different monitoring types acquire multi-dimensional, multi-contact physiological data and periodically send it to a handheld terminal. The handheld terminal applies data mining techniques to reduce the dimensionality of the acquired physiological data, designs a multi-dimensional physiological eigenvector, analyzes the state level of the user's emotion, and pushes a corresponding emotion regulation service according to that level. Through the interactive cooperation of the handheld and wearable terminals, the scheme redirects the physiological data acquired by the wearable terminals toward emotion perception of users, addressing the low accuracy of conventional emotion perception technology; the good mobility of wearable devices also greatly improves the system's flexibility and enhances the user's service experience.
Owner:西安慧脑智能科技有限公司

Method for establishing emotion perception model based on individual face analysis in video

The invention provides a method for establishing an emotion perception model based on individual face analysis in video, comprising the following steps: evaluating the emotional states of a number of test subjects with a positive-and-negative-affect scale, obtaining positive-emotion and negative-emotion scores for each subject's emotional state; collecting the subjects' face video data, where each recording corresponds to a subject's emotional state score; performing denoising preprocessing on the facial key points in the acquired video data in two-dimensional space; selecting facial feature points representative of emotion perception by calculating the variance of a difference measure between adjacent frames; performing feature extraction, feature dimension reduction, and feature selection on the facial feature points, and optimizing the feature set through a classifier using a sequential backward selection (SBS) algorithm; and training and validating a model with a machine-learning regression algorithm, using the obtained individual positive- and negative-emotion scores as annotation data, thereby obtaining and storing a prediction model for individual positive and negative emotion. The method requires no self-report from the user, has high timeliness, and the correlation coefficient between the model's predicted scores and the scale scores reaches a medium-to-strong level.
Owner:INST OF PSYCHOLOGY CHINESE ACADEMY OF SCI
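The sequential backward selection (SBS) step named in the abstract can be sketched as a greedy elimination loop. Here `score_fn` is a stand-in for the classifier's validation score, which the abstract does not specify; the interface is an assumption for illustration.

```python
def sbs(features, score_fn, target_size):
    """Sequential backward selection: start from the full feature set and
    repeatedly drop the feature whose removal hurts the score least,
    until `target_size` features remain.
    `score_fn(subset)` scores a tuple of feature names (e.g. a classifier's
    cross-validation accuracy on those features)."""
    selected = list(features)
    while len(selected) > target_size:
        # try removing each remaining feature; keep the best resulting subset
        best_subset, best_score = None, float("-inf")
        for f in selected:
            subset = [x for x in selected if x != f]
            s = score_fn(tuple(subset))
            if s > best_score:
                best_subset, best_score = subset, s
        selected = best_subset
    return selected
```

With a toy additive score this keeps the highest-scoring features; in practice `score_fn` would retrain and evaluate the classifier on each candidate subset, which is what makes SBS expensive but wrapper-accurate.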

Speaker emotion perception method fusing multi-dimensional information

The invention discloses a speaker emotion perception method fusing multi-dimensional information, and relates to the technical fields of deep learning and human emotion perception. The method includes: inputting a video of a speaker and extracting the speaker's image and voice from the video; inputting the image and voice into a multi-dimensional feature extraction network, extracting the language text and vocal emotion from the voice, and extracting the speaker's facial expression features from the image information; encoding the various feature outputs of the multi-dimensional feature extraction network with a multi-dimensional feature coding algorithm, mapping the multi-dimensional information into a shared coding space; fusing the features in the coding space from low dimension to high dimension with a multi-dimensional feature fusion algorithm to obtain, in the high-dimensional feature space, feature vectors of multi-dimensional information highly correlated with the speaker's emotion; and inputting the fused multi-dimensional information into an emotion perception network for prediction, outputting the speaker's emotion perception distribution. The method effectively resolves ambiguity using the multi-dimensional information and accurately predicts the speaker's emotion perception distribution.
Owner:XIAMEN UNIV
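The "map into a shared coding space, then fuse" step can be sketched with fixed linear projections and a weighted sum. The patent's coding and fusion networks are learned; the explicit matrices and weights below are stand-ins chosen only to show the data flow.

```python
def project(vec, matrix):
    """Multiply a feature vector by a projection matrix (rows = output dims)."""
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]

def fuse_modalities(modal_vecs, projections, weights):
    """Map per-modality feature vectors (e.g. text, voice, face) into one
    shared space via per-modality projections, then fuse by weighted sum.
    All projections must output the same shared dimensionality."""
    shared = [project(v, P) for v, P in zip(modal_vecs, projections)]
    dim = len(shared[0])
    fused = [0.0] * dim
    for w, vec in zip(weights, shared):
        for i in range(dim):
            fused[i] += w * vec[i]
    return fused
```

Note the projections let modalities of different native dimensionality (a 2-d and a 3-d vector below) meet in the same shared space before fusion, which is the point of the coding step.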

EEG-based emotional perception and stimulus sample selection method for adolescent environmental psychology

CN107080546B (Active)
The invention discloses an EEG-based system and method for sensing the emotions of adolescents in environmental psychology, and a method for selecting stimulus samples. The system comprises an EEG signal acquisition module, an EEG signal preprocessing module, an EEG signal feature extraction module, an emotion sensing module, and the like; built environments that adolescents frequently contact, participate in, or are enthusiastic about serve as the visual stimulus sources. The emotion sensing method comprises the steps of selecting the visual stimulus source, acquiring the EEG signal, preprocessing the EEG signal, extracting EEG signal features, training a model, and determining emotion intensity. In the stimulus sample selection method, the arousal and valence dimensions are each divided into 5 emotion intensity levels, and rectangular selection boxes are placed non-equidistantly according to the actual selection of the samples and their distribution in the two-dimensional space. The system and methods offer strong emotion sensing ability and extensibility, and have high application value in environmental psychology research.
Owner:安徽智趣小天使信息科技有限公司
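The sample selection step, with non-equidistant rectangular boxes on the arousal-valence plane, can be sketched as a simple point-in-box filter. The box coordinates and the dictionary interface are illustrative assumptions; the patent derives its boxes from the actual sample distribution.

```python
def select_samples(samples, boxes):
    """Pick stimulus samples whose (arousal, valence) ratings fall inside a
    target class's rectangular selection box.
    `samples` maps sample id -> (arousal, valence) on a 1-5 intensity scale;
    `boxes` maps class label -> (a_min, a_max, v_min, v_max), which need not
    be equally sized or equally spaced (hence "non-equidistant")."""
    picked = {label: [] for label in boxes}
    for sample_id, (arousal, valence) in samples.items():
        for label, (a0, a1, v0, v1) in boxes.items():
            if a0 <= arousal <= a1 and v0 <= valence <= v1:
                picked[label].append(sample_id)
    return picked
```

Samples falling outside every box (ambiguous mid-plane ratings) are simply not selected, which is one plausible reason for sizing the boxes to the observed distribution rather than on a uniform grid.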

Emotion adjustment method, device, computer equipment and storage medium

CN111430006A (Active)
The application relates to an emotion adjustment method, device, computer equipment, and storage medium. The method obtains the user's current emotion value and expected emotion value, where the current emotion value is calculated from the user's physiological data by an emotion perception system. A music selection parameter is determined from the expected and current emotion values, and corresponding target music is selected from a preset music library according to that parameter. The selected target music is then played, and the emotion perception system calculates the user's output emotion value based on the played music. When the difference between the output emotion value and the expected emotion value is greater than a preset threshold, the output emotion value becomes the current emotion value for the next round, and the method returns to the step of obtaining the current and expected emotion values, repeating until the difference is less than or equal to the threshold. The method can increase emotion adjustment efficiency.
Owner:SHENZHEN INST OF ARTIFICIAL INTELLIGENCE & ROBOTICS FOR SOC +1
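The closed loop described in the abstract (measure, select music, play, re-measure, repeat until within threshold) can be sketched as below. `perceive` and `pick_music` are hypothetical stand-ins for the emotion perception system and the music-library lookup, and the toy listener dynamics are purely illustrative; a round cap is added so the sketch cannot loop forever.

```python
def regulate(current, expected, perceive, pick_music,
             threshold=0.1, max_rounds=20):
    """Drive the user's emotion value toward `expected`: while the gap
    exceeds `threshold`, pick a track from the gap, 'play' it, and take
    the newly perceived value as the next round's current value."""
    for _ in range(max_rounds):  # guard against non-convergence
        if abs(expected - current) <= threshold:
            break
        track = pick_music(current, expected)
        current = perceive(track)  # output value feeds the next iteration
    return current

# Toy usage: each played track moves the listener 50% toward the target.
target = 0.8
state = {"v": 0.0}

def pick(cur, exp):
    # illustrative selection parameter: just the direction of the gap
    return "uplifting" if exp > cur else "calming"

def perceive(track):
    state["v"] += 0.5 * (target - state["v"])
    return state["v"]

final = regulate(state["v"], target, perceive, pick, threshold=0.05)
```

Under these toy dynamics the loop halves the gap each round, so it terminates within the threshold well before the round cap.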