Video emotion classification method and system fusing electroencephalogram and stimulation source information

An emotion classification and stimulus-source technology, applied to neural learning methods, sensors, and character and pattern recognition. It addresses the problems that existing methods ignore the video itself or its information and yield unsatisfactory classification results, with the effects of improving efficiency, suppressing useless information, and improving generalization ability.

Active Publication Date: 2021-07-09
XI AN JIAOTONG UNIV
AI Technical Summary

Problems solved by technology

Prior art 2 attends only to the sentiment of the video's bullet-screen comments (barrage), not to the information in the video itself, and therefore cannot classify video clips that have no such comments.
Prior art 3 attends only to the EEG signals generated while the user watches the video, not to the information in the video itself, resulting in unsatisfactory classification performance.



Examples


Embodiment

[0066] Taking the "Global Epidemic Event" video on Twitter as an example, the following illustrates the video emotion classification process that fuses EEG and stimulus-source information.

[0067] (1) Construct the stimulus source-EEG signal dataset

[0068] Video clips were collected from the Internet, comprising positive, negative, and neutral videos in equal numbers: positive videos are clips from comedy films, negative videos are clips from tragic films, and neutral videos are clips from documentaries. Each clip is 3-5 minutes long. The subject wears a 62-channel EEG scanner; once the signal is stable, the subject watches the stimulus-source videos while a staff member plays the clips and records the EEG signals. The videos are played in random order so that the same subject can watch them continuously, with a 15 s interval between clips during which the subject rests and calms down. The collect...
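A minimal Python sketch of how the collected trials might be paired and checked follows. Only the 62 EEG channels, the three balanced sentiment classes, and the collection protocol come from the text; every identifier, the sampling rate, and the data layout are assumptions.

    from dataclasses import dataclass
    from typing import Iterable, List, Tuple
    import numpy as np

    LABELS = {"positive": 0, "negative": 1, "neutral": 2}
    N_CHANNELS = 62       # 62-channel EEG scanner (stated in the text)
    SAMPLE_RATE = 200     # Hz; assumed, the text does not specify it

    @dataclass
    class Trial:
        video_path: str    # the stimulus-source clip shown to the subject
        eeg: np.ndarray    # shape (62, n_samples), recorded while watching
        label: int         # 0 / 1 / 2 for positive / negative / neutral

    def make_trial(video_path: str, eeg: np.ndarray, sentiment: str) -> Trial:
        # Pair one video clip with the EEG recorded during its viewing.
        assert eeg.shape[0] == N_CHANNELS
        return Trial(video_path, eeg, LABELS[sentiment])

    def build_dataset(entries: Iterable[Tuple[str, np.ndarray, str]]) -> List[Trial]:
        # entries were collected with randomized playback order and a
        # 15 s rest interval between clips, as described above.
        dataset = [make_trial(v, e, s) for v, e, s in entries]
        counts = np.bincount([t.label for t in dataset], minlength=3)
        # The text specifies equal numbers of the three sentiment classes.
        assert counts[0] == counts[1] == counts[2], "classes must be balanced"
        return dataset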



Abstract

The invention discloses a video emotion classification method and system fusing electroencephalogram (EEG) and stimulation-source information. The method comprises: constructing a stimulation source-EEG signal dataset, in which a subject watches video clips while an EEG scanner collects the subject's EEG signals; constructing a multi-modal feature fusion model, in which video features and EEG signal features are extracted from the training dataset and a fusion vector is generated by a multi-modal information fusion method based on an attention mechanism; training a fusion vector classification model, in which the fusion vector is used as the input of a fully connected neural network layer for prediction and the network weights are updated according to the difference between the prediction result and the true label; and classifying with the trained model, in which the EEG signals of a subject watching the to-be-classified video are collected, video and EEG features are extracted and fused, and the fusion vector is input into the trained network to obtain the classification result.
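The abstract names the components but not their exact form: separate video and EEG feature extractors, an attention-based fusion step that produces a fusion vector, and a fully connected layer trained on the difference between prediction and true label. Below is a minimal PyTorch sketch of one plausible reading of that pipeline; the feature dimensions (512 for video, 310 for EEG, e.g. 62 channels times 5 frequency bands), the softmax attention over the two modalities, and all identifiers are assumptions, not the patented architecture.

    import torch
    import torch.nn as nn

    class AttentionFusionClassifier(nn.Module):
        # Fuses a video feature vector and an EEG feature vector with
        # learned attention weights, then classifies with a fully
        # connected head, as outlined in the abstract.
        def __init__(self, video_dim=512, eeg_dim=310, fused_dim=256, n_classes=3):
            super().__init__()
            self.video_proj = nn.Linear(video_dim, fused_dim)  # shared space (assumed)
            self.eeg_proj = nn.Linear(eeg_dim, fused_dim)
            self.score = nn.Linear(fused_dim, 1)  # one attention score per modality
            self.classifier = nn.Linear(fused_dim, n_classes)

        def forward(self, video_feat, eeg_feat):
            # Stack the projected modalities: (batch, 2, fused_dim).
            h = torch.stack([torch.tanh(self.video_proj(video_feat)),
                             torch.tanh(self.eeg_proj(eeg_feat))], dim=1)
            # Softmax over the two modalities: informative features get a
            # larger weight, useless information is suppressed.
            w = torch.softmax(self.score(h), dim=1)   # (batch, 2, 1)
            fused = (w * h).sum(dim=1)                # the fusion vector
            return self.classifier(fused)

    # One training step matching the abstract: update the network weights
    # according to the difference between prediction and true label.
    model = AttentionFusionClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    video_feat = torch.randn(8, 512)    # placeholder video features
    eeg_feat = torch.randn(8, 310)      # placeholder EEG features
    labels = torch.randint(0, 3, (8,))  # positive / negative / neutral

    optimizer.zero_grad()
    loss = loss_fn(model(video_feat, eeg_feat), labels)
    loss.backward()
    optimizer.step()

Weighting the two modalities with a learned softmax matches the stated effect of suppressing useless information: when one modality is uninformative for a given clip, its attention weight, and hence its contribution to the fusion vector, shrinks.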

Description

Technical Field

[0001] The invention relates to the field of multi-modal fusion video emotion classification, and in particular to a video emotion classification method and system fusing EEG and stimulus-source information.

Background Technique

[0002] Video emotion classification is a research hot spot in computer vision with broad application value. In video recommendation systems, computing the emotion a user feels while watching a video yields the user's emotional preference, so that videos better matching that preference can be recommended. In public-opinion events, obtaining the videos under a specific hot topic and computing their emotions for guidance helps establish a correct public-opinion orientation and is conducive to a harmonious and stable cyberspace environment. Video emotion classification is also of great significance for video categorization and advertisement placement.

[0003] Therefore,...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62, G06N3/04, G06N3/08, A61B5/378, A61B5/16
CPC: G06N3/08, A61B5/165, A61B5/7267, G06N3/047, G06N3/044, G06F18/214, G06F18/2415, G06F18/253
Inventor: 刘欢, 李珂, 秦涛, 郑庆华, 张玉哲, 陈栩栩
Owner: XI AN JIAOTONG UNIV