A dual-modal video emotion recognition method based on composite spatio-temporal features

A spatio-temporal feature extraction and emotion recognition technology, applied in the field of pattern recognition, which addresses the problem of VLBP feature performance degradation

Active Publication Date: 2019-05-31
HEFEI UNIV OF TECH

AI Technical Summary

Problems solved by technology

These factors will greatly reduce the performance of VLBP features



Examples


Embodiment

[0180] To verify the effectiveness of the present invention, the experiments use the only public bimodal database: the FABO facial-expression and body-gesture bimodal database. Since the database itself is not fully annotated, the present invention selects 12 subjects with a large number of samples and relatively uniform emotion categories for the experiments. The selected samples cover five emotion classes: happiness, fear, anger, boredom, and uncertainty, all of which have been labeled, comprising 238 samples each for gestures and expressions. The experiments in this paper are implemented under Windows XP (dual-core 2.53 GHz CPU, 2 GB RAM) using VC6.0 and OpenCV 1.0. In the experiments, the facial expression frames and the upper-body posture frames are uniformly resized to 96×96 pixels and 128×96 pixels respectively. Part of the images after the facial expression and gesture pictures are resized is shown in Figur...
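The frame-normalization step above (resizing face frames to 96×96 and upper-body posture frames to 128×96) can be sketched as follows. This is a dependency-free nearest-neighbor sketch in modern Python, not the patent's implementation, which used C-based OpenCV 1.0 (where `cvResize` would perform this step); the input sizes here are synthetic placeholders.

```python
import numpy as np

def resize_nearest(frame: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbor resize of a grayscale frame to a fixed size.
    (The original used OpenCV's resize; this avoids the dependency.)"""
    in_h, in_w = frame.shape[:2]
    rows = np.arange(out_h) * in_h // out_h  # source row for each output row
    cols = np.arange(out_w) * in_w // out_w  # source col for each output col
    return frame[rows[:, None], cols]

# Synthetic frames standing in for FABO video frames of arbitrary size.
face = resize_nearest(np.zeros((240, 320), dtype=np.uint8), 96, 96)
posture = resize_nearest(np.zeros((240, 320), dtype=np.uint8), 128, 96)
print(face.shape, posture.shape)  # → (96, 96) (128, 96)
```

Fixing the frame size ensures that the spatio-temporal texture and gradient features extracted later have identical dimensionality across samples.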



Abstract

The invention discloses a dual-modal video emotion recognition method based on composite spatio-temporal features, which comprises the following steps: 1. Extend the existing volume local binary pattern (VLBP) algorithm into a spatio-temporal ternary pattern, and extract spatio-temporal local ternary pattern texture features from human facial expressions and upper-body postures. 2. To compensate for the texture feature's lack of image edge and direction information, the invention further integrates a three-dimensional histogram of oriented gradients feature to enhance the description of the emotional video, and combines the two features into a composite spatio-temporal feature. 3. The D-S evidence combination rule fuses the information of the two modalities to obtain the emotion recognition result. The invention fully describes the emotional video using the composite spatio-temporal features, reduces the time complexity, and improves the accuracy of emotion recognition.
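Step 1 above replaces VLBP's binary thresholding with a ternary one. A minimal sketch of local ternary pattern (LTP) thresholding on a single 3×3 spatial neighborhood is shown below, assuming a tolerance `t` around the center pixel; the patent's actual spatio-temporal (volume) neighborhood and its encoding into histograms are not reproduced here.

```python
import numpy as np

def ltp_codes(patch: np.ndarray, t: int = 5) -> np.ndarray:
    """Ternary codes for the 8 neighbors of a 3x3 patch's center pixel:
    +1 if neighbor >= center + t, -1 if neighbor <= center - t, else 0.
    (VLBP uses only two values: neighbor >= center or not.)"""
    c = int(patch[1, 1])
    # Neighbors in clockwise order starting at the top-left corner.
    neighbors = np.array([patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                          patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]],
                         dtype=int)
    return np.where(neighbors >= c + t, 1,
                    np.where(neighbors <= c - t, -1, 0))

patch = np.array([[60, 52, 40],
                  [58, 50, 49],
                  [30, 70, 50]], dtype=np.uint8)
print(ltp_codes(patch, t=5).tolist())  # → [1, 0, -1, 0, 0, 1, -1, 1]
```

The zero band of width 2t makes the code tolerant to small intensity fluctuations, which is the property the abstract appeals to when replacing the binary VLBP thresholding.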

Description

Technical field

[0001] The invention relates to a feature extraction and classification method belonging to the field of pattern recognition, in particular to an emotion recognition method based on multi-feature description and D-S evidence fusion.

Background technique

[0002] Computer vision and artificial intelligence are now developing rapidly, and human-computer interaction has emerged with them. Humans increasingly hope that computers can possess emotions like humans and understand human emotions. This requires introducing the emotional dimension into human-computer interaction, so that computers gain emotional perception and recognition.

[0003] Emotional expression can be achieved in a variety of ways, mainly including facial expressions, gestures, speech, etc. Among them, facial expressions are obtained by collecting facial images, postures and actions are produced through movements of the hands, head, and so on, and speech is also an impo...
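The D-S evidence fusion named in the technical field can be sketched with Dempster's rule of combination. The sketch below is restricted to singleton emotion hypotheses and uses hypothetical per-modality belief values; the patent's actual construction of basic probability assignments from classifier outputs is not shown.

```python
def ds_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination for two basic probability
    assignments over singleton hypotheses (no compound sets here)."""
    fused = {}
    conflict = 0.0
    for h1, b1 in m1.items():
        for h2, b2 in m2.items():
            if h1 == h2:
                fused[h1] = fused.get(h1, 0.0) + b1 * b2
            else:
                conflict += b1 * b2  # mass assigned to conflicting pairs
    k = 1.0 - conflict  # normalization factor; assumes k > 0
    return {h: b / k for h, b in fused.items()}

# Hypothetical beliefs from the expression and posture modalities.
face = {"happy": 0.6, "anger": 0.3, "fear": 0.1}
posture = {"happy": 0.5, "anger": 0.4, "fear": 0.1}
fused = ds_combine(face, posture)
print(max(fused, key=fused.get))  # prints: happy
```

Because agreeing evidence is multiplied and conflict is normalized away, fusion sharpens the decision relative to either modality alone, which is the motivation for combining the two modalities in this method.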

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00
CPC: G06V40/173; G06V40/174; G06V40/16; G06V40/168; G06V40/10
Inventors: 王晓华, 侯登永, 彭穆子, 李艳秋, 胡敏, 任福继
Owner: HEFEI UNIV OF TECH