Group emotion recognition and abnormal emotion detection method based on dimension emotion model

An emotion recognition and detection technology, applied to character and pattern recognition, acquisition/recognition of facial features, instruments, etc. It addresses the problem that discrete emotion models cannot effectively express group emotions, and achieves the effect of accurate expression.

Active Publication Date: 2021-04-23
CIVIL AVIATION FLIGHT UNIV OF CHINA

AI Technical Summary

Problems solved by technology

These characteristics of group emotions cannot be effectively expressed by discrete emotion models.

Method used



Examples


Embodiment 1

[0086] In step S2, an emotion labeling system is designed according to the manual labeling strategy. The system expresses the P (pleasure) value through the facial expression of a character model, the A (arousal) value through the vibration degree of a heart icon, and the D (dominance) value through the size of the figure.
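A rating collected through such a labeling system can be normalized into PAD coordinates. The sketch below is illustrative only and assumes a discrete 1-to-9 rating scale mapped linearly onto [-1, 1]; the function and class names (`normalize_scale`, `PADLabel`, `make_label`) and the scale bounds are assumptions, not part of the patent.

```python
from dataclasses import dataclass

def normalize_scale(raw: int, lo: int = 1, hi: int = 9) -> float:
    """Map a discrete rating on [lo, hi] linearly onto [-1.0, 1.0]."""
    if not lo <= raw <= hi:
        raise ValueError(f"rating {raw} outside [{lo}, {hi}]")
    return 2.0 * (raw - lo) / (hi - lo) - 1.0

@dataclass
class PADLabel:
    pleasure: float   # P: expressed via the character model's face
    arousal: float    # A: expressed via the heart icon's vibration
    dominance: float  # D: expressed via the size of the figure

def make_label(p_raw: int, a_raw: int, d_raw: int) -> PADLabel:
    """Build one normalized PAD annotation from three raw ratings."""
    return PADLabel(normalize_scale(p_raw),
                    normalize_scale(a_raw),
                    normalize_scale(d_raw))
```

For example, ratings (9, 5, 1) map to the PAD coordinate (1.0, 0.0, -1.0).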

[0087] There are two main methods of constructing emotion datasets: performance and excerption. In the performance approach, an actor (preferably with professional training) simulates a typical emotion type (joy, panic, sadness) through body movements. This method yields clear emotional contrast and strong expressiveness, but a gap remains between such performance and real emotion, and it demands high acting quality of the performers, so it is not universally applicable. The excerption approach applies manual labeling to video clips of real scenes to evaluate the emotional st...

Embodiment 2

[0094] The consistency of step S4 is judged as follows: calculate and evaluate three statistics of the PAD data, the sample mean μ, the sample standard deviation σ, and the coefficient of variation CV, where the coefficient of variation is defined as:

[0095] CV = σ / μ

[0096] A small coefficient of variation indicates that the consistency of the verified label data is high; conversely, a large coefficient of variation indicates that the consistency is low.

[0097] For the PAD data of different video clips, the label values in the same dimension are aggregated. A large coefficient of variation reflects a large degree of dispersion relative to the mean, indicating that the consistency and certainty of the volunteers' ratings for this group are low; conversely, they are high. Generally speaking, for videos ...
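The coefficient-of-variation check above can be sketched in a few lines of Python. This is a minimal illustration using the standard library's sample statistics; the function name and the example rating values are assumptions for demonstration.

```python
import statistics

def coefficient_of_variation(values):
    """CV = sigma / mu, using the sample standard deviation."""
    mu = statistics.mean(values)
    if mu == 0:
        raise ValueError("CV is undefined for a zero mean")
    return statistics.stdev(values) / mu

# Hypothetical P-dimension ratings from several volunteers for one clip:
consistent = [0.70, 0.72, 0.68, 0.71]   # tight agreement -> small CV
scattered  = [0.10, 0.90, 0.30, 0.75]   # wide disagreement -> large CV
```

A clip whose labels yield a small CV would be kept as a consistently labeled sample; a large CV flags low agreement among volunteers.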

Embodiment 3

[0100] The extraction of group motion features in step S5 comprises extraction of the foreground region, extraction of optical flow features, extraction of trajectory features, and graphical expression of the motion features. The foreground region is extracted with an improved ViBE+ algorithm; after detection, the foreground region of the t-th frame is denoted R_t. Optical flow features are extracted with the Gunnar Farnebäck dense optical flow field for visual expression; for the t-th frame image, the optical flow offsets of pixel (x, y) in the horizontal and vertical directions are u and v respectively. Trajectory features are extracted with the iDT algorithm, which densely samples video pixels and uses the optical flow to judge each tracking point's position in the next frame, forming a tracking trajectory expressed as T(p_1, p_2, ..., p_L), where L ≤ 15. The graphical expression of the motion feature adopts the...
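The trajectory-forming step can be sketched in pure NumPy. This is a minimal illustration, not the iDT implementation: it assumes the dense flow fields are already available as (H, W, 2) arrays (as Farnebäck-style flow would provide) and tracks a single point; `track_point` and `MAX_TRAJ_LEN` are hypothetical names.

```python
import numpy as np

MAX_TRAJ_LEN = 15  # corresponds to the constraint L <= 15 above

def track_point(flows, start, max_len=MAX_TRAJ_LEN):
    """Propagate a point through a sequence of dense flow fields.

    flows: list of (H, W, 2) arrays; flows[t][y, x] = (u, v), the offset
           of pixel (x, y) between frame t and frame t+1.
    start: (x, y) position of the tracking point in the first frame.
    Returns the trajectory T(p_1, p_2, ..., p_L) as a list of (x, y).
    """
    h, w, _ = flows[0].shape
    x, y = start
    traj = [(x, y)]
    for flow in flows:
        if len(traj) >= max_len:
            break  # cap the trajectory length at L <= max_len
        u, v = flow[int(round(y)), int(round(x))]
        x, y = x + u, y + v
        if not (0 <= x < w and 0 <= y < h):
            break  # point left the frame; terminate the trajectory
        traj.append((x, y))
    return traj
```

With a constant rightward flow of one pixel per frame on a 10-pixel-wide field, a point starting at (0, 0) is tracked to (9, 0) and then stops at the frame border; with a slow flow the trajectory is capped at 15 points.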



Abstract

The invention discloses a group emotion recognition and abnormal emotion detection method based on a dimensional emotion model, and relates to the technical field of intelligent emotion recognition. The method comprises the steps of: creating a group emotion video data set through data collection and manual annotation, based on the cognitive-psychology PAD three-dimensional emotion model and the position relationship of six typical emotions in PAD space; creating an emotion prediction model based on group behavior, which maps group motion features to three-dimensional coordinates in PAD space; and constructing an abnormal emotion classifier, so that when the two abnormal emotions of anger and fear are detected, the scene is judged to be in an abnormal state. For group motion video, the continuous change of group emotion can be accurately expressed, and global abnormal states can be effectively identified.
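The final classification step can be illustrated with a nearest-prototype sketch in PAD space. The prototype coordinates below are hypothetical placeholders (the patent's actual coordinates for the six typical emotions are not reproduced in this record), and the function names are assumptions; only the rule "anger or fear implies an abnormal state" comes from the abstract.

```python
import math

# Hypothetical PAD prototypes (P, A, D) in [-1, 1], for illustration only.
PROTOTYPES = {
    "joy":      ( 0.8,  0.5,  0.4),
    "calm":     ( 0.4, -0.6,  0.2),
    "sadness":  (-0.6, -0.3, -0.4),
    "surprise": ( 0.3,  0.8,  0.1),
    "anger":    (-0.6,  0.7,  0.4),
    "fear":     (-0.7,  0.8, -0.6),
}
ABNORMAL = {"anger", "fear"}  # the two abnormal emotions from the abstract

def classify(pad):
    """Return the emotion whose prototype is nearest to a PAD coordinate."""
    return min(PROTOTYPES, key=lambda e: math.dist(pad, PROTOTYPES[e]))

def is_abnormal(pad):
    """Flag the scene as abnormal when the nearest emotion is anger or fear."""
    return classify(pad) in ABNORMAL
```

A predicted PAD coordinate near the fear prototype would trigger the abnormal-state judgment, while one near joy would not.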

Description

Technical field

[0001] The invention relates to the technical field of intelligent emotion recognition, in particular to a group emotion recognition and abnormal emotion detection method based on a dimensional emotion model.

Background technique

[0002] In recent years, with the continuous development of artificial intelligence, deep learning, psychological science and cognitive science, computers can be used to identify, understand, express and communicate human emotions, making them more comprehensive and intelligent. This has received increasingly extensive attention and in-depth exploration from the academic community. For intelligent video surveillance technology, by collecting the verbal communication, facial expressions and body movements of the crowd in a scene, their emotional state and inner intentions can be understood and analyzed, and their next behavioral intentions inferred, so that the computer responds accordingly, so tha...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/62
CPC: G06V40/174; G06V40/20; G06V20/41; G06F18/2411
Inventor: 潘磊, 王艾, 赵欣, 刘国春, 高大鹏, 袁小珂, 严宏, 马婷, 朱建刚, 严崇耀, 卢志伟
Owner CIVIL AVIATION FLIGHT UNIV OF CHINA