Individual emotion recognition method fusing expressions and postures

A method for individual emotion recognition fusing facial expressions and body postures, applied in the field of video analysis. It addresses the low accuracy of individual emotion recognition in public spaces and the drawbacks of shallow learning, whose parameters are difficult to tune and whose features must be selected manually; the method reduces the time required for training, adapts well to varied conditions, and produces accurate classification results.

Active Publication Date: 2022-03-11
SICHUAN UNIV


Problems solved by technology

[0005] The purpose of the present invention is to provide a method for individual emotion recognition in video sequences. It applies deep learning to individual emotion in video, gives full play to the self-learning capability of deep learning, and effectively fuses the emotional information expressed by facial expressions and body postures. This addresses the shortcomings of current shallow-learning approaches, whose parameters are difficult to tune and whose features must be selected manually, and which achieve low accuracy for individual emotion recognition in public spaces.



Examples


Embodiment Construction

[0037] The present invention is described in further detail through the examples below. It must be pointed out that the following examples are only intended to further illustrate the present invention and cannot be interpreted as limiting its protection scope; non-essential improvements and adjustments made to the present invention in light of the above content still fall within the protection scope of the present invention.

[0038] As shown in figure 1, the individual emotion recognition method fusing expressions and postures specifically includes the following steps:

[0039] (1) Divide the video sequence data set into three individual emotion categories: negative, neutral and positive; split the labeled data set into a training set and a test set at a ratio of 5:5, and create the data labels.
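Step (1) can be sketched as follows. This is an illustrative reading of the patent text, not its actual implementation: samples are grouped by the three emotion classes and each class is split 5:5, so both halves keep the class balance. All names (`split_dataset`, `samples`) are hypothetical.

```python
import random

CLASSES = ("negative", "neutral", "positive")

def split_dataset(samples, seed=0):
    """samples: list of (video_id, label) pairs; returns (train, test).

    Shuffles within each class, then splits each class 5:5 so that
    both halves preserve the class distribution.
    """
    rng = random.Random(seed)
    train, test = [], []
    for label in CLASSES:
        group = [s for s in samples if s[1] == label]
        rng.shuffle(group)
        half = len(group) // 2
        train.extend(group[:half])
        test.extend(group[half:])
    return train, test

# Toy data set: 12 labeled video sequences, 4 per class.
samples = [(f"vid{i}", CLASSES[i % 3]) for i in range(12)]
train, test = split_dataset(samples)
print(len(train), len(test))  # 6 6
```

A stratified split like this is one common way to realize a "5:5 ratio" requirement; the patent does not specify whether the split is stratified.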

[0040] (2) Apply face detection to the video sequences of each data set in step (1) above to obtain face sequence...
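A minimal sketch of step (2), assuming the face detector is supplied as a callable that returns a bounding box `(x, y, w, h)` or `None` per frame (in practice this would be, e.g., an OpenCV cascade or a CNN detector; the patent does not name one). Frames are modeled as 2D lists; all names here are illustrative.

```python
def extract_face_sequence(frames, detect):
    """Run a detector over every frame and collect the face crops.

    frames: list of 2D frames (list of rows); detect: frame -> (x, y, w, h) or None.
    Frames where no face is found are skipped.
    """
    faces = []
    for frame in frames:
        box = detect(frame)
        if box is None:
            continue  # no face in this frame
        x, y, w, h = box
        crop = [row[x:x + w] for row in frame[y:y + h]]
        faces.append(crop)
    return faces

# Usage with a fake detector that always returns a fixed 4x4 box.
frames = [[[1] * 8 for _ in range(8)] for _ in range(3)]
faces = extract_face_sequence(frames, lambda f: (2, 2, 4, 4))
print(len(faces), len(faces[0]), len(faces[0][0]))  # 3 4 4
```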



Abstract

The invention provides an individual emotion recognition method fusing expressions and postures, mainly relating to the classification of individual sequence emotions by a multi-modal individual emotion recognition network. The method comprises the following steps: constructing a multi-modal individual emotion recognition network containing two channels that process an expression sequence and a posture sequence; extracting expression features and posture features from a video sequence in parallel with the network; and finally fusing the two features to obtain the individual sequence emotion classification. The method fully exploits the self-learning ability of deep learning, avoids the limitations of manual feature extraction, and adapts well to varied conditions. The multi-stream structure of the deep learning network enables parallel training and prediction, and the classification results of the multiple sub-networks are finally fused, improving both accuracy and working efficiency.
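The abstract describes two parallel channels whose outputs are fused into a single classification. A minimal sketch of such decision-level fusion, assuming each sub-network emits a probability vector over the three classes; the weighted average and the 0.5 weight are illustrative assumptions, since the patent excerpt does not specify the fusion rule:

```python
CLASSES = ("negative", "neutral", "positive")

def fuse_predictions(expr_probs, pose_probs, w_expr=0.5):
    """Weighted average of the expression and posture channels' class probabilities."""
    w_pose = 1.0 - w_expr
    return [w_expr * e + w_pose * p for e, p in zip(expr_probs, pose_probs)]

def classify(expr_probs, pose_probs):
    """Fuse the two channels and return the highest-scoring class."""
    fused = fuse_predictions(expr_probs, pose_probs)
    return CLASSES[fused.index(max(fused))]

# Both channels lean positive, so the fused decision is "positive".
print(classify([0.1, 0.2, 0.7], [0.2, 0.3, 0.5]))  # positive
```

Averaging per-channel probabilities is one standard late-fusion scheme for multi-stream networks; feature-level fusion (concatenating the two feature vectors before a shared classifier) is the other reading consistent with the abstract.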

Description

Technical field

[0001] The invention relates to individual sequence emotion recognition in the field of video analysis, and in particular to a video analysis method that classifies individual sequence emotions with a multi-stream neural network fusing expressions and postures.

Background technique

[0002] Emotion recognition aims to give computers the ability to perceive and analyze human emotions and intentions, so as to play a role in entertainment, medical care, education, public safety and other fields. Emotional expressions do not exist in isolation; among the channels involved, the combined visual channel of facial expression and body posture is considered an important one for judging human behavioral cues. Facial expressions most intuitively reflect people's emotional state and mental activities and are an important way of expressing emotion. However, irrelevant factors in the real environment will have a great impact on the recognition of facia...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06V20/40, G06V40/16, G06V10/764, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06N3/045, G06F18/24, Y02D10/00
Inventor: 卿粼波, 文虹茜, 杨红, 任超, 李林东
Owner: SICHUAN UNIV