Dual-mode video emotion recognition method with composite spatial-temporal characteristic

A spatio-temporal feature extraction and emotion recognition technique in the field of pattern recognition, which addresses the degradation of VLBP feature performance

Active Publication Date: 2017-03-22
HEFEI UNIV OF TECH


Problems solved by technology

These factors will greatly reduce the performance of VLBP features



Examples


Embodiment

[0185] To verify the effectiveness of the present invention, the experiments use the only publicly available bimodal database: the FABO expression-and-posture bimodal database. Since the database itself is not fully labeled, the present invention selects 12 subjects with a large number of samples and relatively uniform emotion categories for the experiments. The selected samples cover five emotion classes: happiness, fear, anger, boredom, and uncertainty, all of which are labeled, comprising 238 samples of gestures and expressions. The experiments were run under Windows XP (dual-core 2.53 GHz CPU, 2 GB RAM) using VC6.0 and OpenCV 1.0. In the experiments, the facial-expression frames and the upper-body-posture frames are uniformly resized to 96×96 pixels and 128×96 pixels respectively. Part of the images after unifying the sizes of the facial-expression and gesture pictures is shown as follows: Figur...
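The embodiment normalizes every facial-expression frame to 96×96 pixels and every upper-body-posture frame to 128×96 pixels before feature extraction. The patent's experiments did this with OpenCV 1.0 under VC6.0; as a minimal illustrative sketch (not the patent's implementation), the same preprocessing step can be expressed with a nearest-neighbor resize in Python/NumPy:

```python
import numpy as np

def resize_nearest(frame: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbor resize of a 2-D grayscale frame; an illustrative
    stand-in for the OpenCV resize used in the patent's experiments."""
    in_h, in_w = frame.shape
    # Map each output coordinate back to its nearest source coordinate.
    rows = (np.arange(out_h) * in_h // out_h).clip(0, in_h - 1)
    cols = (np.arange(out_w) * in_w // out_w).clip(0, in_w - 1)
    return frame[rows[:, None], cols[None, :]]

# Normalize a face frame to 96x96 and an upper-body frame to 128x96 (h x w).
face = resize_nearest(np.zeros((240, 320), dtype=np.uint8), 96, 96)
body = resize_nearest(np.zeros((240, 320), dtype=np.uint8), 128, 96)
print(face.shape, body.shape)  # (96, 96) (128, 96)
```

In practice one would use a library resize with interpolation (e.g. bilinear); the sketch only shows the size-normalization step itself.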



Abstract

The invention discloses a dual-mode video emotion recognition method with a composite spatial-temporal characteristic. The method comprises the following steps: 1, extending an existing volume local binary pattern algorithm into a spatial-temporal ternary pattern to obtain spatial-temporal local ternary pattern moment texture characteristics of a human face expression and an upper body posture; 2, enhancing the description of an emotional video by further fusing characteristics of a three-dimensional gradient direction histogram in order to compensate for the lack of expression of the texture characteristics for image edges and direction information, and combining the two characteristics into the composite spatial-temporal characteristic; and 3, fusing information of the two modes according to a D-S evidence combination rule to obtain an emotion recognition result. By the use of the composite spatial-temporal characteristic for full description of an emotional video, the method reduces the time complexity and improves the accuracy of the emotion recognition.
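Step 1 of the abstract extends the binary neighbor comparison of VLBP into a ternary one. The core idea can be sketched with the 2-D spatial building block: each neighbor is coded +1, 0, or -1 relative to the center pixel with a tolerance threshold (the patent extends this comparison across neighboring frames to obtain the spatio-temporal descriptor; the threshold value `t` here is an illustrative assumption, not the patent's parameter):

```python
import numpy as np

def local_ternary_codes(img: np.ndarray, t: int = 5) -> np.ndarray:
    """Per-pixel ternary codes for the 8 neighbors of each interior pixel:
    +1 if neighbor > center + t, -1 if neighbor < center - t, else 0.
    This is only the spatial 2-D building block of a local ternary pattern;
    the patented method applies the same comparison through time as well."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]  # centers (interior pixels only)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape + (8,), dtype=np.int8)
    for k, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes[..., k] = (n > c + t).astype(np.int8) - (n < c - t).astype(np.int8)
    return codes
```

As in standard local ternary patterns, the ternary codes are typically split into an "upper" binary pattern (codes equal to +1) and a "lower" one (codes equal to -1), each histogrammed separately.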

Description

Technical Field

[0001] The invention relates to a feature extraction and classification method, belonging to the field of pattern recognition, and in particular to an emotion recognition method based on multi-feature description and D-S evidence fusion.

Background Technique

[0002] Computer vision and artificial intelligence are now developing rapidly, and human-computer interaction has become widespread. Humans increasingly hope that computers can possess emotions as humans do and can understand human emotions. This requires introducing the emotional dimension into human-computer interaction, so that computers gain emotional perception and recognition.

[0003] Emotional expression can be achieved in a variety of ways, mainly including facial expressions, gestures, and speech. Among them, facial expressions are obtained by collecting facial images, postures and actions are produced through movements of the hands, head, and so on, and speech is also an impo...
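The technical field names D-S (Dempster-Shafer) evidence fusion as the mechanism for combining the expression and posture modalities. As a minimal sketch, Dempster's combination rule for two mass functions restricted to singleton hypotheses reduces to a normalized product (the full rule operates on all subsets of the frame of discernment; the mass values below are hypothetical, not from the patent):

```python
def dempster_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule for two mass functions over singleton hypotheses.
    Mass assigned to conflicting pairs is discarded and the remainder is
    renormalized; total conflict leaves the rule undefined."""
    hyps = set(m1) | set(m2)
    joint = {h: m1.get(h, 0.0) * m2.get(h, 0.0) for h in hyps}
    agreement = sum(joint.values())
    if agreement == 0.0:
        raise ValueError("total conflict: Dempster's rule undefined")
    return {h: v / agreement for h, v in joint.items()}

# Hypothetical mass functions from a face classifier and a gesture classifier.
face = {"happiness": 0.6, "anger": 0.3, "fear": 0.1}
gesture = {"happiness": 0.5, "anger": 0.4, "fear": 0.1}
fused = dempster_combine(face, gesture)
print(max(fused, key=fused.get))  # happiness
```

The fused masses reinforce hypotheses on which both modalities agree, which is why the combination rule suits bimodal recognition: a class supported by both face and gesture evidence dominates after normalization.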

Claims


Application Information

IPC(8): G06K9/00
CPC: G06V40/173; G06V40/174; G06V40/16; G06V40/168; G06V40/10
Inventor 王晓华, 侯登永, 彭穆子, 李艳秋, 胡敏, 任福继
Owner HEFEI UNIV OF TECH