Visual speech multi-mode collaborative analysis method based on emotion context and system

A sentiment analysis and context technology applied in the field of emotion recognition. It addresses problems such as ignoring the context of the analysis object, not considering real-world scenes, and declining recognition accuracy.

Active Publication Date: 2013-05-29
JIANGSU UNIV

AI Technical Summary

Problems solved by technology

Existing emotion recognition research still has many limitations: results focus mainly on single-channel emotion analysis; existing multi-channel fusion research mainly covers only the two channels of expression and voice; and the research objects are limited to a few performers in the laboratory.




Embodiment Construction

[0048] The present invention will be described in detail below with reference to the embodiments shown in the drawings. These embodiments do not limit the present invention, however, and any structural, methodological, or functional changes made by those skilled in the art based on these embodiments fall within its protection scope.

[0049] Referring to Figure 1 and Figure 2, the emotion-context-based visual-speech multimodal collaborative emotion analysis method of the present invention includes:

[0050] S1. Dynamically extract and analyze emotional context information based on the situation and the analysis object in the visual-speech scene. The emotional context information includes the prior emotional context information and the spatio-temporal context information contained in the visual-speech scene, where the prior emotional context information includes environmental con...
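Step S1 distinguishes two kinds of emotional context: prior emotional context (e.g. environment-level cues) and spatio-temporal context accumulated over time. A minimal sketch of how such a container might look is shown below; all class and field names are hypothetical illustrations, not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EmotionalContext:
    """Illustrative bundle of the two context types named in step S1."""
    prior: List[float] = field(default_factory=list)                   # prior emotional context cues
    spatio_temporal: List[List[float]] = field(default_factory=list)   # per-frame context features

    def update(self, frame_features: List[float]) -> None:
        # Dynamic extraction: append the latest frame's context features.
        self.spatio_temporal.append(list(frame_features))

    def as_vector(self) -> List[float]:
        # Flatten prior context plus the most recent spatio-temporal
        # features into one vector, ready for sparse representation (S3).
        latest = self.spatio_temporal[-1] if self.spatio_temporal else []
        return list(self.prior) + latest
```

In this sketch, each new video/audio frame calls `update`, and `as_vector` produces the fixed-length context channel that later steps consume.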



Abstract

The invention discloses a visual-speech multimodal collaborative analysis method and system based on emotion context. The method includes: (S1) dynamically extracting and analyzing emotion context information based on the situation and the analysis object in a visual-speech scene; (S2) extracting, in real time, visual emotion characteristics of the analysis object in the visual scene and speech emotion characteristics of the analysis object in the speech scene; (S3) carrying out structured sparse representation of the emotion context information, posture characteristics, expression characteristics, and speech emotion characteristics; and (S4) carrying out collaborative analysis and identification of the multimodal emotion information by means of sentiment classification agents. Because the emotion context information, posture characteristics, speech emotion characteristics, and expression characteristics contain large amounts of complementary emotion information, and because structured sparse representation is combined with collaborative analysis by multiple sentiment agents, the emotion of the person being analyzed can be identified accurately even when some channel information is lost, improving the precision and robustness of emotion analysis in natural interactive environments.
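The four-step pipeline in the abstract can be sketched in code. The following is a toy illustration under heavy assumptions: the feature extractors are stubs, the "structured sparse representation" is replaced by simple top-k magnitude thresholding, and each "sentiment agent" is a linear scorer over one channel, fused by majority vote. None of these function names or parameters come from the patent; the sketch only shows why per-channel agents tolerate a lost channel.

```python
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def extract_context(scene):      # S1: emotional context features (stub)
    return np.asarray(scene["context"], dtype=float)

def extract_visual(scene):       # S2: posture + expression features (stub)
    return np.asarray(scene["visual"], dtype=float)

def extract_speech(scene):       # S2: speech emotion features (stub)
    return np.asarray(scene["speech"], dtype=float)

def sparse_represent(x, k=3):
    # S3: crude stand-in for structured sparse representation -- keep only
    # the k largest-magnitude coefficients of the channel, zero the rest.
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def agent_classify(features, weights):
    # One "sentiment agent": a linear scorer over a single channel.
    return int(np.argmax(weights @ features))

def collaborative_analysis(scene, agent_weights):
    # S4: each agent votes from its own channel; majority vote fuses the
    # votes, so a missing channel removes one vote instead of breaking
    # the whole pipeline.
    channels = {"context": extract_context,
                "visual": extract_visual,
                "speech": extract_speech}
    votes = []
    for name, extract in channels.items():
        if scene.get(name) is None:          # tolerate lost channel information
            continue
        f = sparse_represent(extract(scene))
        votes.append(agent_classify(f, agent_weights[name]))
    return EMOTIONS[max(set(votes), key=votes.count)]
```

For example, with identity weights per channel, a scene whose context and visual channels both point to the second emotion class still classifies correctly when the speech channel is set to `None`.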

Description

technical field

[0001] The present invention relates to the technical field of emotion recognition, and in particular to a method and system for emotion-context-based multimodal collaborative emotion analysis of visual and voice channels.

Background technique

[0002] With the development of multimedia technology, research on emotion analysis and recognition based on audio and video is of great significance for enhancing the intelligence and humanization of computers, developing new human-machine environments, and promoting multimedia technology, signal processing, and other related fields. Existing emotion recognition research still has many limitations: results focus mainly on single-channel emotion analysis; existing multi-channel fusion research mainly covers only the two channels of expression and voice; and the research objects are limited to a few performers in the laboratory, and does not consider the rea...


Application Information

IPC(8): G06F17/27
Inventors: 毛启容, 赵小蕾, 詹永照, 白李娟, 胡素黎, 董俊健
Owner: JIANGSU UNIV