Multimodal continuous emotion recognition method, service reasoning method and system

A technology for multimodal emotion recognition and speech emotion recognition, applied in the field of service robots, which addresses the problems of poor robustness, scarce data sets, and low recognition accuracy, and achieves the effects of improving user satisfaction and improving recognition accuracy.

Active Publication Date: 2022-06-24
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

[0004] As far as the inventors know, current recognition of human emotions is based mainly on discrete emotion models. However, since the expression of human emotion is a complex and continuous process, it is difficult for discrete emotion models to fully express the user's emotional state. At the same time, because continuous emotional states are complex to annotate, data sets are scarce, and single-modal continuous emotion recognition suffers from low recognition accuracy and poor robustness. Therefore, in order to reduce the impact of data-set scarcity, improve the accuracy of emotion recognition, and enhance the robustness of the recognition system, it is necessary to exploit the complementarity between modalities and realize multimodal-fusion emotion recognition, thereby improving the quality of the final emotion recognition.
[0005] The service target of home service robots is people. At present, the services provided by service robots rarely take the user's current emotional state into account, and the reasoning rules are rigid: they do not consider that the home environment changes dynamically and is full of uncertain factors. As a result, the inferred services cannot serve users well and cannot reflect the intelligence of the service robot.


Examples


Embodiment 1

[0055] This embodiment discloses a multimodal continuous emotion recognition method based on facial expressions and speech, as shown in Figure 1, comprising the following steps:

[0056] Step 1: Obtain video data containing the user's facial expressions and voice;

[0057] In this embodiment, experimental verification is performed on the AVEC2013 dataset. The AVEC2013 database is an open dataset provided by the third Audio/Visual Emotion Challenge; it not only contains facial expression and speech emotion data, but also provides emotion labels for two continuous dimensions, Arousal and Valence, as shown in Figure 2.

[0058] Step 2: Based on a pre-trained face recognition model, extract face images and perform emotion recognition; this specifically includes:

[0059] Step 2.1: Use a convolutional neural network with a cascade architecture to perform face detection, discard abnormal frames from the expression video frames, and extract face images;
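As an illustration only, a minimal sketch of Step 2.1 is shown below using the MTCNN detector from facenet_pytorch, which is one well-known example of a cascade-architecture CNN for face detection; the patent does not name a specific detector, so this choice and the frame-handling details are assumptions.

```python
# Sketch of Step 2.1: detect a face per frame, discard frames without a detection.
# MTCNN (facenet_pytorch) is used here as an example cascade-architecture CNN;
# the patent's exact detector is not specified in this excerpt.
import cv2
import torch
from facenet_pytorch import MTCNN

detector = MTCNN(image_size=224, keep_all=False, device="cpu")

def extract_faces(video_path):
    """Return a stacked tensor of cropped face images from a video file."""
    faces = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        face = detector(rgb)           # cropped face tensor, or None if no face found
        if face is None:               # treat frames without a face as abnormal and drop them
            continue
        faces.append(face)
    cap.release()
    return torch.stack(faces) if faces else torch.empty(0)
```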

[0060] Fi...

Embodiment 2

[0107] The purpose of this embodiment is to provide a multimodal continuous emotion recognition system based on facial expressions and speech, including:

[0108] a data acquisition module, configured to acquire video data containing the user's facial expressions and voice;

[0109] The facial expression emotion recognition module is configured to extract face images from the video image sequence and perform feature extraction on the face images to obtain facial expression emotion features; according to the facial expression emotion features, continuous emotion recognition is performed based on a pre-trained deep learning model;
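A minimal sketch of such a continuous expression regressor is given below. The ResNet-18 backbone, the two-output (arousal, valence) head, and the tanh output range are illustrative assumptions; the patent's pre-trained deep learning model is not specified in this excerpt.

```python
# Illustrative continuous (arousal, valence) regressor over face images.
# Backbone, head, and output range are assumptions, not the patent's network.
import torch
import torch.nn as nn
from torchvision import models

class ExpressionRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)   # load suitable pre-trained weights in practice
        backbone.fc = nn.Identity()                # keep the 512-d feature vector
        self.backbone = backbone
        self.head = nn.Linear(512, 2)              # outputs: (arousal, valence)

    def forward(self, face_batch):                 # face_batch: (N, 3, 224, 224)
        features = self.backbone(face_batch)       # facial expression emotion features
        return torch.tanh(self.head(features))     # continuous values in [-1, 1]

model = ExpressionRegressor().eval()
with torch.no_grad():
    preds = model(torch.randn(8, 3, 224, 224))     # (8, 2): per-frame arousal/valence
```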

[0110] The speech emotion recognition module is configured to obtain speech emotion features from the speech data using Mel-frequency cepstral coefficients; according to the speech emotion features, continuous emotion recognition is performed based on a pre-trained transfer learning network;
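The MFCC feature extraction named above can be sketched with librosa as follows; the sampling rate, number of coefficients, and the added delta features are illustrative assumptions, since the patent's exact settings are not given in this excerpt. These per-frame features would then be fed to the pre-trained transfer learning network for continuous emotion prediction.

```python
# Sketch of MFCC-based speech emotion features using librosa.
# n_mfcc, sampling rate, and delta features are illustrative choices.
import librosa
import numpy as np

def mfcc_features(wav_path, n_mfcc=13):
    y, sr = librosa.load(wav_path, sr=16000)                 # mono audio at 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape: (n_mfcc, n_frames)
    delta = librosa.feature.delta(mfcc)                      # first-order dynamics
    return np.vstack([mfcc, delta]).T                        # (n_frames, 2 * n_mfcc)
```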

[0111] The data fusion module is configured to fuse the ...
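The paragraph above is truncated in this excerpt, so the fusion rule is not visible. As an illustration only, a simple decision-level fusion of the two unimodal arousal/valence predictions by weighted averaging is sketched below; the weighting scheme and weights are assumptions, not the patent's actual fusion strategy.

```python
# Illustrative decision-level fusion of expression and speech predictions.
# The weighted-average rule and the weights are assumptions.
import numpy as np

def fuse_predictions(expr_pred, speech_pred, w_expr=0.6, w_speech=0.4):
    """expr_pred, speech_pred: arrays of shape (T, 2) holding (arousal, valence)."""
    expr_pred = np.asarray(expr_pred)
    speech_pred = np.asarray(speech_pred)
    assert expr_pred.shape == speech_pred.shape
    return w_expr * expr_pred + w_speech * speech_pred

fused = fuse_predictions(np.zeros((100, 2)), np.ones((100, 2)))   # -> 0.4 everywhere
```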

Embodiment 3

[0116] The purpose of this embodiment is to provide a computer-readable storage medium.

[0117] A computer-readable storage medium having a computer program stored thereon which, when executed by a processor, implements the method described in Embodiment 1.



Abstract

The invention provides a multimodal continuous emotion recognition method, a service reasoning method and a system. The method includes: acquiring video data containing the user's facial expressions and voice; for the video image sequence, extracting face images and performing feature extraction on them to obtain facial expression emotion features, then performing continuous emotion recognition according to those features; processing the speech data to obtain speech emotion features and performing continuous emotion recognition according to them; and fusing the results of expression emotion recognition and speech emotion recognition, which overcomes the shortcomings of single-modal continuous emotion recognition and improves the accuracy of emotion recognition. On this basis, service reasoning is performed based on a multi-entity Bayesian network model, so that the service robot can dynamically adjust its services according to the user's emotions.
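The emotion-conditioned service reasoning described above can be illustrated, in a heavily simplified form, by an ordinary discrete Bayesian network. The sketch below uses pgmpy; the variables, states, and probability values are invented for illustration and the full multi-entity Bayesian network of the patent is not reproduced here.

```python
# Simplified illustration of emotion-conditioned service reasoning.
# The patent's multi-entity Bayesian network is reduced to a single discrete
# Bayesian network; all variables, states, and CPD values are hypothetical.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Emotion", "Service")])

# P(Emotion): 0 = negative, 1 = neutral, 2 = positive (hypothetical prior)
cpd_emotion = TabularCPD("Emotion", 3, [[0.3], [0.4], [0.3]])

# P(Service | Emotion): 0 = play soothing music, 1 = do nothing (hypothetical)
cpd_service = TabularCPD(
    "Service", 2,
    [[0.8, 0.3, 0.1],    # play soothing music
     [0.2, 0.7, 0.9]],   # do nothing
    evidence=["Emotion"], evidence_card=[3],
)
model.add_cpds(cpd_emotion, cpd_service)

# Infer the service decision given a recognized negative emotion.
posterior = VariableElimination(model).query(["Service"], evidence={"Emotion": 0})
print(posterior)
```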

Description

Technical Field

[0001] The invention belongs to the technical field of service robots, and in particular relates to a multimodal continuous emotion recognition method, a service reasoning method and a system.

Background Art

[0002] The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.

[0003] As service robots play an increasingly important role in home scenarios, natural human-robot interaction has become one of the key factors affecting user satisfaction and the comfort of human-robot coexistence. The goal of a home service robot is to be able to perceive users' emotions and provide high-quality services according to the user's emotional state.

[0004] As far as the inventors know, current recognition of human emotions relies mainly on discrete emotion models. However, since the expression of human emotions is a complex and continuous process, it is difficult for discrete emotion models to fully express the user's emotional state...

Claims


Application Information

Patent Type & Authority Patent (China)
IPC IPC(8): G06V40/16; G06V10/44; G06V10/764; G06V10/82; G06K9/62; G06N3/04; G06N3/08; G06N5/04; G06N7/00; G10L25/24; G10L25/30; G10L25/63
CPC G06N3/08; G06N5/04; G10L25/24; G10L25/63; G10L25/30; G06V40/174; G06V40/172; G06V40/168; G06V10/44; G06N3/047; G06N7/01; G06N3/048; G06N3/045; G06F18/24155; G06F18/241; G06F18/254
Inventor 路飞, 张龙
Owner SHANDONG UNIV