Video emotion recognition method and device based on time sequence multi-model fusion modeling and medium

An emotion recognition and multi-model fusion technology, applicable to character and pattern recognition, neural learning methods, biological neural network models, etc. It addresses the problem of low accuracy in video emotion recognition and achieves the effects of improving recognition ability and avoiding interference.

Pending Publication Date: 2020-06-19
GUANGZHOU SHURUI INTELLIGENT TECH CO LTD

AI Technical Summary

Problems solved by technology

[0006] The present invention provides a video emotion recognition method based on time-series multi-model fusion modeling to solve the technical problem of the low accuracy of existing video emotion recognition.




Embodiment Construction

[0069] The following will clearly and completely describe the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative efforts fall within the protection scope of the present invention.

[0070] Referring to Figure 1 and Figure 2, a preferred embodiment of the present invention provides a video emotion recognition method based on time-series multi-model fusion modeling, comprising at least the following steps:

[0071] S101. Select a data set from a video emotion database as the training data set, and preprocess the training data set. Here, the preprocessing is to perform data preprocessing on the input original image data, including process...
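Below is a minimal Python sketch of the kind of frame-level preprocessing that step S101 describes. The use of OpenCV, uniform sampling of 16 frames per clip, Haar-cascade face cropping, and 224×224 inputs scaled to [0, 1] are illustrative assumptions rather than details taken from the patent.

```python
# Sketch of S101-style preprocessing (assumptions: OpenCV decoding,
# 16 uniformly sampled frames, Haar-cascade face crop, 224x224 inputs).
import cv2
import numpy as np

def preprocess_video(path, num_frames=16, size=224):
    """Uniformly sample frames, crop the largest detected face, normalize to [0, 1]."""
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, max(total - 1, 0), num_frames).astype(int)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            # Keep the largest detected face to reduce background interference.
            x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
            frame = frame[y:y + h, x:x + w]
        frame = cv2.resize(frame, (size, size))
        frames.append(frame.astype(np.float32) / 255.0)
    cap.release()
    if not frames:
        return np.zeros((num_frames, size, size, 3), dtype=np.float32)
    return np.stack(frames)
```

A call such as `preprocess_video("clip.mp4")` (hypothetical file name) returns an array of shape (16, 224, 224, 3) that can be stacked into training batches.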



Abstract

The invention discloses a video emotion recognition method based on time-series multi-model fusion modeling. The method comprises the steps of: selecting a data set from a video emotion database as a training data set and preprocessing the training data set; constructing a convolutional neural network model based on a feature sampling structure from the preprocessed training data set; constructing a long short-term memory (LSTM) network model based on an attention mechanism from the video spatial feature sequence extracted by the convolutional neural network model; and fusing the convolutional neural network model and the LSTM network model to obtain a video emotion recognition model. According to the embodiments of the invention, the video emotion recognition model constructed by fusing temporal feature modeling with the other models can effectively improve the accuracy of video emotion recognition.
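Read as an architecture, the abstract describes a two-branch network: a per-frame CNN for spatial features and an attention-based LSTM over the resulting feature sequence, with the two branches fused for the final prediction. The PyTorch sketch below shows one possible realization; the ResNet-18 backbone, single LSTM layer, attention form, seven emotion classes, and equal-weight late fusion are assumptions for illustration, not details specified by the patent.

```python
# Sketch of a fused CNN + attention-LSTM video emotion classifier
# (assumed details: ResNet-18 backbone, 256-d LSTM, 7 classes, 0.5/0.5 fusion).
import torch
import torch.nn as nn
from torchvision import models

class VideoEmotionNet(nn.Module):
    def __init__(self, num_classes=7, hidden=256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # 512-d frame features
        self.cnn_head = nn.Linear(512, num_classes)      # spatial (frame-level) branch
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)                 # scalar attention score per time step
        self.lstm_head = nn.Linear(hidden, num_classes)  # temporal branch

    def forward(self, x):                                # x: (B, T, 3, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1).reshape(b, t, 512)
        cnn_logits = self.cnn_head(feats.mean(dim=1))          # average frame features
        h, _ = self.lstm(feats)                                # (B, T, hidden)
        weights = torch.softmax(self.attn(h), dim=1)           # attention over time
        context = (weights * h).sum(dim=1)                     # attention-pooled state
        lstm_logits = self.lstm_head(context)
        return 0.5 * cnn_logits + 0.5 * lstm_logits            # simple late fusion
```

Given a batch of preprocessed frame sequences of shape (batch, frames, 3, 224, 224), the model outputs one set of emotion-class logits per video.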

Description

Technical Field

[0001] The present invention relates to the technical field of data mining, and in particular to a video emotion recognition method, device, and storage medium based on time-series multi-model fusion modeling.

Background Art

[0002] Breakthroughs in artificial intelligence technology in the fields of computer vision, speech recognition, and natural language processing have promoted the development of human-computer emotional interaction. The exploration of human-computer emotional interaction methods with the ability to understand and express emotion has gradually become a research hotspot in the field of human-computer interaction. As a cross-disciplinary research topic, video emotion recognition is of great significance for promoting the development of human-computer emotional interaction technology and for mining the emotional value of massive video data.

[0003] In the research and practice of the prior art, the inventors of the present...


Application Information

IPC(8): G06K9/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/084; G06V40/161; G06V40/168; G06V40/174; G06V20/41; G06V20/46; G06N3/045; G06F18/241; G06F18/253
Inventor: 李弘, 曾晓南, 张金喜
Owner: GUANGZHOU SHURUI INTELLIGENT TECH CO LTD