Multi-modal continuous emotion recognition method, service reasoning method and system

A multi-modal emotion recognition technology applied in the field of service robots. It addresses problems such as poor robustness, scarce data sets, and low recognition accuracy, and achieves the effect of improving user satisfaction and recognition accuracy.

Active Publication Date: 2021-06-25
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

[0004] According to the inventor's understanding, current recognition of human emotion is based mainly on discrete emotion models. Because the expression of human emotion is a complex and continuous process, discrete emotion models can hardly express the user's emotional state fully. At the same time, continuous emotional states are complex to annotate, the corresponding data sets are scarce, and single-modal continuous emotion recognition suffers from low recognition accuracy and poor robustness. Therefore, to further reduce the impact of data-set scarcity, improve the accuracy of emotion recognition, and enhance the robustness of the recognition system, it is necessary to exploit the complementarity between the modalities and realize multi-modal fusion for emotion recognition, thereby improving the quality of the final emotion recognition.
[0005] The service target of home service robots is people. At present, the services provided by service robots rarely take the user's current emotional state into account, and their reasoning rules are rigid: they ignore the fact that the home environment changes dynamically and is full of uncertain factors. As a result, the inferred services cannot serve users well and do not reflect the intelligence of a service robot.

Examples

Embodiment 1

[0055] This embodiment discloses a multimodal continuous emotion recognition method based on facial expression and voice which, as shown in Figure 1, includes the following steps:

[0056] Step 1: Obtain video data including user facial expressions and voice;

[0057] This embodiment is verified experimentally on the AVEC2013 data set. The AVEC2013 database is a public data set provided by the third Audio/Visual Emotion Challenge; it contains not only facial expression and voice emotion data but also emotion labels for the two continuous dimensions, Arousal and Valence, as shown in Figure 2.

[0058] Step 2: Extract face images for emotion recognition based on a pre-trained face recognition model; this specifically includes:

[0059] Step 2.1: Use a convolutional neural network with a cascaded architecture to perform face detection, discard abnormal frames in the expression video, and extract the face images;

[0060] First, by co...
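
As an illustration of Step 2.1, the sketch below runs a cascaded-architecture CNN face detector over the video frames, discards frames without a reliable detection, and keeps the cropped face images. The specific detector is not named in this excerpt; MTCNN from the facenet-pytorch package, the 0.9 confidence threshold, and the frame-reading loop are assumptions made purely for illustration.

```python
# Illustrative sketch only: MTCNN (a cascaded CNN detector) stands in for the
# patent's face detector; the confidence threshold is an assumption.
import cv2
import torch
from PIL import Image
from facenet_pytorch import MTCNN

device = "cuda" if torch.cuda.is_available() else "cpu"
mtcnn = MTCNN(image_size=160, margin=20, post_process=True, device=device)

def extract_faces(video_path, conf_threshold=0.9):
    """Return cropped face tensors, skipping 'abnormal' frames with no reliable face."""
    faces = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        boxes, probs = mtcnn.detect(rgb)
        if boxes is None or probs[0] < conf_threshold:
            continue                      # discard abnormal frame
        face = mtcnn(rgb)                 # aligned face crop, shape (3, 160, 160)
        if face is not None:
            faces.append(face)
    cap.release()
    return faces
```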

Embodiment 2

[0107] The purpose of this embodiment is to provide a multimodal continuous emotion recognition system based on expression and voice, including:

[0108] A data acquisition module configured to acquire video data including user facial expressions and voice;

[0109] The expression emotion recognition module is configured to extract face images from the video image sequence, perform feature extraction on the face images to obtain expression emotion features, and carry out continuous emotion recognition based on a pre-trained deep learning model according to the expression emotion features;
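
A minimal sketch of such an expression branch, assuming a ResNet-18 backbone and a two-output regression head for the continuous valence and arousal dimensions; the patent's actual pre-trained deep learning model is not specified in this excerpt.

```python
# Illustrative sketch: an assumed ResNet-18 backbone extracts expression features
# from a cropped face, and a linear head regresses (valence, arousal) in [-1, 1].
import torch
import torch.nn as nn
from torchvision import models

class ExpressionRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the fc layer
        self.head = nn.Linear(backbone.fc.in_features, 2)

    def forward(self, face):                        # face: (B, 3, H, W)
        feats = self.features(face).flatten(1)      # expression emotion features
        return torch.tanh(self.head(feats))         # continuous (valence, arousal)

expr_model = ExpressionRegressor().eval()
with torch.no_grad():
    va_face = expr_model(torch.randn(1, 3, 160, 160))
```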

[0110] The speech emotion recognition module is configured to obtain speech emotion features from the speech data using Mel-frequency cepstral coefficients, and to perform continuous emotion recognition based on a pre-trained transfer learning network according to the speech emotion features;
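
A similarly hedged sketch of the speech branch: librosa computes the Mel-frequency cepstral coefficients, and a small fully connected head stands in for the pre-trained transfer learning network, which is only named, not detailed, in this excerpt; the file path and layer sizes are placeholders.

```python
# Illustrative sketch: utterance-level MFCC statistics as speech emotion features,
# fed to a stand-in head; the patent itself uses a pre-trained transfer learning network.
import librosa
import numpy as np
import torch
import torch.nn as nn

def mfcc_features(wav_path, n_mfcc=40):
    """Mean and standard deviation of MFCCs over time -> a (2 * n_mfcc,) vector."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)        # (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

speech_head = nn.Sequential(                      # placeholder for the transfer network
    nn.Linear(80, 64), nn.ReLU(),
    nn.Linear(64, 2), nn.Tanh())                  # continuous (valence, arousal)

feats = torch.tensor(mfcc_features("utterance.wav"), dtype=torch.float32)
with torch.no_grad():
    va_speech = speech_head(feats.unsqueeze(0))
```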

[0111] The data fusion module is configured to fuse expression emotion recognition results a...
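
The fusion rule itself is truncated in this excerpt, so the sketch below shows only a generic decision-level fusion, a weighted average of the two (valence, arousal) predictions; the modality weights are illustrative assumptions, not values taken from the patent.

```python
# Illustrative decision-level fusion of the two modality predictions; the weights
# are assumptions, not the patent's fusion parameters.
import torch

def fuse(va_face: torch.Tensor, va_speech: torch.Tensor,
         w_face: float = 0.6, w_speech: float = 0.4) -> torch.Tensor:
    """Weighted average of per-modality (valence, arousal) predictions."""
    return w_face * va_face + w_speech * va_speech

fused = fuse(torch.tensor([[0.2, -0.1]]), torch.tensor([[0.4, 0.0]]))
# tensor([[ 0.2800, -0.0600]])
```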

Embodiment 3

[0116] The purpose of this embodiment is to provide a computer-readable storage medium.

[0117] A computer-readable storage medium stores a computer program thereon, and when the program is executed by a processor, the method described in Embodiment 1 is implemented.

Abstract

The invention discloses a multi-modal continuous emotion recognition method, a service reasoning method, and a system. The method comprises the following steps: acquiring video data containing the user's facial expressions and voice; extracting face images from the video image sequence and performing feature extraction on them to obtain expression emotion features; performing continuous emotion recognition according to the expression emotion features; acquiring voice emotion features from the voice data and performing continuous emotion recognition according to the voice emotion features; and fusing the expression emotion recognition result with the voice emotion recognition result, which overcomes the shortcomings of a single modality in continuous emotion recognition and improves emotion recognition precision. On this basis, service reasoning is carried out with a multi-entity Bayesian network model, so that the service robot can dynamically adjust its services according to the user's emotion.
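
The abstract states that service reasoning is performed with a multi-entity Bayesian network model. The sketch below collapses that idea to a tiny hand-rolled discrete Bayesian network evaluated by enumeration, purely to illustrate how a service could be inferred from the recognized emotion together with an uncertain context variable; every variable, state, and probability here is invented for illustration and is not taken from the patent.

```python
# Illustrative stand-in for multi-entity Bayesian network service reasoning:
# a two-parent discrete Bayesian network marginalized by exhaustive enumeration.
from itertools import product

# Beliefs over the parents (the emotion belief would come from the recognizer).
p_emotion = {"negative": 0.7, "positive": 0.3}
p_evening = {"yes": 0.8, "no": 0.2}

# P(service | emotion, evening): an invented conditional probability table.
p_service = {
    ("negative", "yes"): {"play_music": 0.6, "dim_lights": 0.3, "none": 0.1},
    ("negative", "no"):  {"play_music": 0.5, "dim_lights": 0.1, "none": 0.4},
    ("positive", "yes"): {"play_music": 0.2, "dim_lights": 0.3, "none": 0.5},
    ("positive", "no"):  {"play_music": 0.1, "dim_lights": 0.1, "none": 0.8},
}

def posterior_service():
    """Marginalize over the parents to obtain P(service)."""
    post = {}
    for emo, eve in product(p_emotion, p_evening):
        weight = p_emotion[emo] * p_evening[eve]
        for service, p in p_service[(emo, eve)].items():
            post[service] = post.get(service, 0.0) + weight * p
    return post

post = posterior_service()
print(max(post, key=post.get))   # -> 'play_music' under these invented numbers
```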

Description

technical field

[0001] The invention belongs to the technical field of service robots, and in particular relates to a multimodal continuous emotion recognition method, a service reasoning method and a system.

Background technique

[0002] The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.

[0003] As service robots come to play an important role in household scenarios, natural human-computer interaction becomes one of the key factors affecting user satisfaction and the comfort of human-machine coexistence. The goal of the home service robot is to recognize the user's emotion and provide high-quality services according to the user's emotional state.

[0004] According to the inventor's understanding, current recognition of human emotion is based mainly on discrete emotion models, but since the expression of human emotion is a complex and continuous process, it i...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/46; G06K9/62; G06N3/04; G06N3/08; G06N5/04; G06N7/00; G10L25/24; G10L25/30; G10L25/63
CPC: G06N3/08; G06N5/04; G10L25/24; G10L25/63; G10L25/30; G06V40/174; G06V40/172; G06V40/168; G06V10/44; G06N3/047; G06N7/01; G06N3/048; G06N3/045; G06F18/24155; G06F18/241; G06F18/254
Inventor 路飞张龙
Owner SHANDONG UNIV