Suicide emotion perception method based on multi-modal fusion of voice and micro-expressions

A technology combining emotion perception and micro-expression analysis, applied in the field of emotion perception, which addresses the insufficient reliability of single-modality research results and achieves convenient operation and high performance.

Pending Publication Date: 2020-12-18
SOUTH CHINA UNIV OF TECH
Cites: 0 | Cited by: 4
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

Most existing emotion recognition methods study only a single factor, which is one-sided: some people can control their inner emotions well without outwardly expressing them, so single-factor results are less reliable.

Examples

Embodiment

[0042] A suicide emotion perception method based on the multi-modal fusion of speech and micro-expressions, as shown in Figure 1, comprises the following steps:

[0043] S1. Use a Kinect with an infrared camera to collect video and audio;
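
A minimal capture sketch for this step is shown below. The patent specifies a Kinect with an infrared camera; as an illustrative assumption only, a generic webcam read through OpenCV and a microphone read through the `sounddevice` package stand in for the Kinect's video and audio streams.

```python
# Minimal, illustrative capture sketch (not the patent's actual capture stack):
# a generic webcam via OpenCV and a microphone via sounddevice stand in for the
# Kinect's RGB/infrared video and audio streams.
import cv2
import sounddevice as sd

def capture_clip(seconds=5, fps=30, sample_rate=16000):
    """Record a short clip: a list of video frames plus a mono audio buffer."""
    cap = cv2.VideoCapture(0)  # device 0 stands in for the Kinect camera
    audio = sd.rec(int(seconds * sample_rate), samplerate=sample_rate, channels=1)
    frames = []
    for _ in range(int(seconds * fps)):
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    sd.wait()      # block until the audio recording has finished
    cap.release()
    return frames, audio.squeeze()
```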

[0044] S2. Use different methods to analyze the image frames and audio in the video and convert them into corresponding feature texts;

[0045] For the acquired audio, separate feature extraction is performed along the three dimensions of speech content, intonation, and speech rate, and the results are converted into three sets of corresponding feature texts. For the acquired image frames, facial expressions are captured, features are extracted and reduced in dimensionality, and a neural network classifies them into corresponding facial-expression text descriptions.
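
A rough sketch of the image-frame branch described above is given below, assuming a Haar-cascade face detector, PCA for the dimensionality reduction, and a small multilayer perceptron as the neural network. The label set, PCA size, and classifier choice are illustrative assumptions, and `pca`/`mlp` are presumed to have been fitted on a labelled expression dataset beforehand.

```python
# Illustrative sketch of the image-frame branch: face capture, feature extraction,
# PCA dimensionality reduction, and neural-network classification into a textual
# expression label. Label set, PCA size, and classifier choice are assumptions.
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

EXPRESSION_LABELS = ["neutral", "sad", "despairing", "angry"]  # assumed label set

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_features(frame, size=(48, 48)):
    """Crop the first detected face and return it as a flat grayscale vector."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = face_detector.detectMultiScale(gray, 1.1, 5)
    if len(boxes) == 0:
        return None
    x, y, w, h = boxes[0]
    face = cv2.resize(gray[y:y + h, x:x + w], size)
    return face.astype(np.float32).ravel() / 255.0

# Both models are assumed to have been fitted on a labelled expression dataset,
# with integer class labels indexing into EXPRESSION_LABELS.
pca = PCA(n_components=50)
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)

def expression_text(frame):
    """Map one video frame to an expression text description (None if no face found)."""
    feats = face_features(frame)
    if feats is None:
        return None
    reduced = pca.transform(feats.reshape(1, -1))
    return EXPRESSION_LABELS[int(mlp.predict(reduced)[0])]
```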

[0046] Step S2 specifically includes the following steps:

[0047] S2.1. After the audio signal is denoised, the speech is converted in turn into three corresponding feature text descriptions according to the ...
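
The paragraph above is truncated, but a sketch of how the three feature texts might be produced is given below. It assumes a placeholder `transcribe()` call for the speech-content text and librosa's pYIN pitch tracker for intonation; the pitch and speech-rate thresholds used to map values to text labels are illustrative assumptions, not values from the patent.

```python
# Illustrative sketch of S2.1: derive three feature texts (content, intonation,
# speech rate) from a denoised audio signal. transcribe() is a placeholder for
# any ASR engine; the pitch/rate thresholds are assumptions, not patent values.
import numpy as np
import librosa

def transcribe(audio, sample_rate):
    """Placeholder for an automatic speech recognition call."""
    raise NotImplementedError

def audio_feature_texts(audio, sample_rate):
    # 1) Content text: the literal transcript of what was said.
    content_text = transcribe(audio, sample_rate)

    # 2) Intonation text: summarize pitch with librosa's pYIN fundamental-frequency tracker.
    f0, _, _ = librosa.pyin(audio,
                            fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C7"),
                            sr=sample_rate)
    mean_f0 = np.nanmean(f0)
    intonation_text = "flat, low intonation" if mean_f0 < 120 else "normal or raised intonation"

    # 3) Speech-rate text: words per second estimated from the transcript length.
    duration = len(audio) / sample_rate
    words_per_second = len(content_text.split()) / max(duration, 1e-6)
    speech_rate_text = "slow speech" if words_per_second < 1.5 else "normal or fast speech"

    return content_text, intonation_text, speech_rate_text
```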

Abstract

The invention discloses a suicide emotion perception method based on multi-modal fusion of voice and micro-expression. The method comprises the following steps: collecting video and audio using a Kinect with an infrared camera; analyzing the image frames and audio in the video with different methods and converting them into corresponding feature texts; fusing the feature texts, namely performing dimension-reduction processing to obtain fused features; and classifying the fused features with a SoftMax activation function to judge whether the emotion belongs to suicide emotions. According to the invention, the multi-modal data is aligned at the text level, and the text intermediate representation together with the proposed fusion method forms a framework that fuses speech and facial expressions. The invention reduces the dimensionality of the voice and facial-expression data and unifies the two kinds of information into one component. Because the Kinect is used for data acquisition, the method is non-invasive, high in performance, and convenient to operate.
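
As a concrete illustration of the fusion and classification steps summarized above, the sketch below joins the per-modality feature texts, embeds them with TF-IDF, reduces the dimensionality with PCA, and scores the result with a two-class SoftMax whose outputs correspond to non-suicidal and suicidal emotion. The embedding, PCA size, and weights are assumptions for illustration; `vectorizer`, `pca`, `W`, and `b` would be learned from labelled training data.

```python
# Illustrative sketch of the fusion and SoftMax classification stage: join the
# per-modality feature texts, embed them with TF-IDF, reduce the dimensionality
# with PCA, and score with a two-class SoftMax. vectorizer, pca, W, and b are
# placeholders that would be learned from labelled training data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer()           # assumed fitted on training feature texts
pca = PCA(n_components=32)               # the "dimension reduction" fusion step
W = np.zeros((32, 2))                    # learned SoftMax weights (placeholder)
b = np.zeros(2)                          # learned SoftMax bias (placeholder)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def is_suicide_emotion(feature_texts):
    """feature_texts: the text descriptions produced for one clip, one per modality."""
    joined = " ".join(feature_texts)                       # text-level alignment of modalities
    x = pca.transform(vectorizer.transform([joined]).toarray())[0]
    probs = softmax(x @ W + b)                             # [p(non-suicidal), p(suicidal)]
    return bool(probs[1] > 0.5), float(probs[1])
```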

Description

Technical field

[0001] The invention belongs to the field of emotion perception, and in particular relates to a suicide emotion perception method based on multimodal fusion of voice and micro-expression.

Background technique

[0002] In daily life, we often encounter people who, over a small setback, cannot see past it and contemplate or even commit suicide. This causes enormous psychological harm to those who love them and brings spiritual and material loss to society. In fact, such people exhibit corresponding abnormalities in speech, body, and expression before committing suicide. If we can carefully observe and understand these signs through legitimate technical means, a life may be saved.

[0003] In addition to mastering the corresponding psychological knowledge and improving the popularization of psychological courses, it is also effective to use scientific and technological means to observe these abnormal ...

Claims


Application Information

IPC(8): G06K9/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/049, G06N3/08, G06V40/174, G06V20/46, G06N3/045, G06N3/044, G06F2218/08, G06F18/2411, G06F18/214, G06F18/253, Y02D10/00
Inventor: 杜广龙
Owner: SOUTH CHINA UNIV OF TECH