
Virtual reality video emotion recognition method and system based on time sequence characteristics

A virtual reality and emotion recognition technology, applied to neural learning methods, character and pattern recognition, and instruments, achieving the effects of avoiding noise interference, reducing data subjectivity, and reducing individual differences

Pending Publication Date: 2022-06-03
SOUTH CHINA UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0004] To address the lack of a cross-paradigm, time-dimension regression model for continuous emotion labeling in virtual reality scene video emotion recognition, the present invention starts from the establishment of a virtual reality scene audio-and-video continuous emotion data set and proposes a virtual reality video emotion recognition method and system based on temporal features.


Image

Three drawings, each captioned "Virtual reality video emotion recognition method and system based on time sequence characteristics".
Examples


Embodiment 1

[0027] As shown in Figure 1, this embodiment provides a virtual reality video emotion recognition method based on time-series features, which mainly includes the following steps:

[0028] S1. Establish a virtual reality scene audio-and-video data set with continuous emotion labels. The data set includes manually extracted continuous emotion labels, audio features, visual features, and physiological signal features (EEG, BVP, GSR, ECG).

[0029] This step builds a virtual reality scene audio-and-video data set with continuous emotion labels; as shown in Figure 2, the specific process includes:

[0030] S11. Collect virtual reality scene videos containing different emotional contents, have M healthy subjects perform SAM self-assessment on the collected N virtual reality scene videos, and screen out F virtual reality scene videos in each emotional quadrant according to the assessment scores.
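The quadrant screening in S11 can be sketched as follows. This is a hypothetical illustration only: the excerpt does not give the SAM scale or the ranking rule, so a 1-9 scale with midpoint 5 and a distance-from-midpoint ranking are assumptions, as are all names in the code.

```python
# Hypothetical sketch of step S11: each of M subjects gives SAM
# (Self-Assessment Manikin) valence/arousal ratings for N candidate
# videos; the top-F videos per valence-arousal quadrant are kept.
from statistics import mean

def screen_videos(ratings, F):
    """ratings: {video_id: [(valence, arousal), ...]} per-subject scores.
    Returns {quadrant: [video_id, ...]} with at most F videos each."""
    quadrants = {"HVHA": [], "LVHA": [], "LVLA": [], "HVLA": []}
    for vid, scores in ratings.items():
        v = mean(s[0] for s in scores)   # mean valence across subjects
        a = mean(s[1] for s in scores)   # mean arousal across subjects
        key = ("H" if v >= 5 else "L") + "V" + ("H" if a >= 5 else "L") + "A"
        # rank within the quadrant by distance from the scale midpoint
        # (5, 5), so the most strongly rated videos are retained
        quadrants[key].append((abs(v - 5) + abs(a - 5), vid))
    return {q: [vid for _, vid in sorted(items, reverse=True)[:F]]
            for q, items in quadrants.items()}
```

For example, with two HVHA candidates and F = 1, only the video farther from the neutral midpoint survives the screening.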

[0031] S12. Set up a continuous SAM self...

Embodiment 2

[0053] Based on the same inventive concept as Embodiment 1, this embodiment provides a virtual reality video emotion recognition system based on time series features, including:

[0054] The data set establishment module establishes a virtual reality scene audio-and-video data set with continuous emotion labels; the data set includes manually extracted continuous emotion labels, audio features, visual features, and physiological signal features;

[0055] The preprocessing module performs cross-paradigm data preprocessing on the virtual reality scene video to be recognized;

[0056] The feature extraction module performs feature extraction on the preprocessed data, using a deep learning network to extract deep features from the audio, visual, time-series, and physiological signals;
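The excerpt does not disclose the network architectures used by the feature extraction module. As an illustration of the time-series side only, a minimal sliding-window extractor (a hand-crafted stand-in for the deep network's temporal features) might look like this; the window length and the mean/std features are assumptions, not the patent's design.

```python
# Minimal sketch of windowed time-series feature extraction for one
# physiological channel (e.g. GSR). A real system would feed such
# windows to a deep network; mean/std per window is illustrative only.
from statistics import mean, stdev

def window_features(signal, win=4, hop=2):
    """Slide a window of `win` samples with step `hop` over `signal`
    and return a (mean, std) pair per window."""
    feats = []
    for start in range(0, len(signal) - win + 1, hop):
        w = signal[start:start + win]
        feats.append((mean(w), stdev(w)))
    return feats
```

Overlapping windows (hop smaller than win) keep the feature sequence dense enough to align with per-frame continuous emotion labels.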

[0057] The multi-modal regression model generation and training module trains a single-modal virtual reality scene video emotion regression model, and integrates to gene...



Abstract

The invention belongs to the field of the cross-fusion of cognitive psychology, virtual reality technology, and continuous emotion recognition, and relates to a virtual reality video emotion recognition method and system based on time-series features. The method comprises the steps of: building a virtual reality scene audio-and-video data set with continuous emotion labels; performing cross-paradigm data preprocessing on the virtual reality scene video to be recognized; performing feature extraction on the preprocessed data, extracting deep features from the audio, visual, time-series, and physiological signals using a deep learning network; training single-modal virtual reality scene video emotion regression models, and fusing them to generate and train a multi-modal emotion regression neural network model; and inputting the virtual reality scene video to be recognized into the multi-modal emotion regression neural network model and outputting a continuous emotion regression result. Based on the multi-modal features of the time-series, visual, audio, and physiological signals, the method provides a new way to evaluate the emotion of a virtual reality scene video, and performs continuous emotion recognition efficiently and accurately.
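The fusion step described in the abstract, where single-modal regression outputs are combined into one multi-modal result, can be sketched as a simple late fusion. The modality names and the weighted-average rule here are illustrative assumptions; the patent trains a neural network for this, whose details are not in the excerpt.

```python
# Hedged sketch of late fusion: each per-modality regressor emits a
# continuous emotion estimate per time step, and a weight per modality
# combines them into one fused sequence. Weights shown are illustrative,
# not the patent's trained parameters.
MODALITIES = ("audio", "visual", "temporal", "physio")

def fuse_predictions(preds, weights):
    """preds: {modality: [estimate per time step]}; weights: {modality: w}.
    Returns the weighted-average fused sequence."""
    total_w = sum(weights[m] for m in MODALITIES)
    length = len(next(iter(preds.values())))
    fused = []
    for t in range(length):
        fused.append(sum(weights[m] * preds[m][t] for m in MODALITIES) / total_w)
    return fused
```

With equal weights this reduces to a plain average of the four single-modal predictions at each time step.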

Description

Technical field

[0001] The invention belongs to the field of the cross-integration of cognitive psychology, virtual reality technology, and continuous emotion recognition, and in particular relates to a method and system for virtual reality video emotion recognition based on time-series features.

Background technique

[0002] Emotion induction and emotion recognition are among the hotspots in the field of emotion research. They have important application and research value in recommendation systems, game design, psychological research, human-computer interaction, and human-computer emotion perception. Virtual reality scenes, with their high immersion and strong sense of presence, have been widely used in education, medical care, entertainment, and brain-computer interfaces, and have received extensive attention and research in the field of emotion induction, where continuous emotion evaluation is particularly important.

[0003] In the current research on...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06V20/40; G06K9/62; G06N3/04; G06N3/08; G06V10/77; G06V10/80; G06V10/82
CPC: G06N3/08; G06N3/044; G06N3/045; G06F18/2135; G06F18/25; Y02D10/00
Inventor: 晋建秀, 王洒洒, 舒琳
Owner: SOUTH CHINA UNIV OF TECH