
Video bullet screen emotion analysis method based on multi-scale attention convolutional coding network

A convolutional-coding and attention technique in the field of deep learning and sentiment analysis, addressing problems such as insufficient extraction of emotional polarity, poorly discriminative sample features, and classification errors.

Pending Publication Date: 2020-05-12
JIANGNAN UNIV

AI Technical Summary

Problems solved by technology

[0005] However, if a sample contains multiple target words and these target words carry several different emotional polarities, processing the sample with a CNN alone falls far short of the desired effect. Although a CNN can extract the information within its convolution window and the associations inside that window, it cannot model the sample as a whole, so the CNN may fail to fully extract the sample's emotional polarity, or may even produce errors. Moreover, compared with traditional RNN models, existing CNN models remain slightly inferior at extracting sample features.

Method used



Examples


Embodiment Construction

[0080] In order to make the purpose, technical solutions, and advantages of the present invention clearer, the specific implementation of the present invention is further described below in combination with the technical solutions and the accompanying drawings.

[0081] As shown in Figure 3, the video bullet-screen sentiment analysis method based on a multi-scale attention convolutional coding network proceeds in the following specific steps:

[0082] Step 1. Collect video files, use CRNN to extract video bullet-screen samples, identify the target words in each sample sentence, and label the emotional polarity of each target word, obtaining a bullet-screen sample data set. The sentence samples and target words in the data set are each preprocessed with GloVe, converting them into vector form that the neural network can easily process. The loss function of the GloVe model is:

[0083] J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^{\top} \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2

[0084] where w_i and \tilde{w}_j are the word vectors to be solved, b_i and \tilde{b}_j are the corresponding bias terms, X_{ij} is the co-occurrence count of words i and j, and f(X_{ij}) is the weight function. ...
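The GloVe objective above is a weighted least-squares fit of word-vector inner products to log co-occurrence counts. The following is a minimal NumPy sketch of that objective, not the patent's implementation; the weighting constants (x_max = 100, alpha = 0.75) are the conventional GloVe defaults and all variable names are illustrative.

```python
import numpy as np

def glove_weight(x, x_max=100.0, alpha=0.75):
    """Weight function f(X_ij): down-weights rare co-occurrences and
    caps the influence of very frequent ones at 1."""
    return np.where(x < x_max, (x / x_max) ** alpha, 1.0)

def glove_loss(W, W_tilde, b, b_tilde, X):
    """Weighted least-squares GloVe objective over co-occurrence matrix X.
    W, W_tilde: (V, d) word and context vectors; b, b_tilde: (V,) biases."""
    mask = X > 0                       # log X_ij is defined only for X_ij > 0
    inner = W @ W_tilde.T + b[:, None] + b_tilde[None, :]
    err = inner - np.log(np.where(mask, X, 1.0))
    return float(np.sum(glove_weight(X) * mask * err ** 2))

# Tiny example: 3-word vocabulary, 2-dimensional vectors.
rng = np.random.default_rng(0)
X = np.array([[0., 2., 1.], [2., 0., 3.], [1., 3., 0.]])
W, Wt = rng.normal(size=(3, 2)), rng.normal(size=(3, 2))
b, bt = np.zeros(3), np.zeros(3)
loss = glove_loss(W, Wt, b, bt, X)
```

In practice one minimizes this loss by stochastic gradient descent over the nonzero entries of X; here only the objective itself is evaluated.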



Abstract

The invention discloses a video bullet-screen sentiment analysis method based on a multi-scale attention convolutional coding network, and belongs to the field of natural language processing and sentiment analysis. A hierarchically structured attention mechanism is combined with a convolutional neural network, overcoming the difficulty convolutional neural networks have in extracting long-range dependency information from text. The method first encodes the input sample and the target words through multi-channel convolutional feature learning; an attention mechanism then extracts the parts of the sentence related to the target words, the feature vectors encoded from these parts are concatenated into a multi-scale comprehensive feature vector, and this final vector is fed into a classifier for sentiment classification.
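The pipeline described in the abstract can be sketched end to end: multi-scale convolutional encoding of the sentence, target-aware attention over each feature map, concatenation into one comprehensive vector, and a softmax classifier. This is a minimal NumPy illustration under assumed shapes, not the patented HACNN implementation; all dimensions, window sizes, and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d_bank(x, kernels):
    """1-D convolutions with several window sizes (multi-scale channels).
    x: (seq_len, d) embedded sentence; each kernel has shape (k, d, h).
    Returns one (seq_len - k + 1, h) feature map per window size k."""
    maps = []
    for K in kernels:
        k, d, h = K.shape
        out = np.stack([np.einsum('kd,kdh->h', x[t:t + k], K)
                        for t in range(len(x) - k + 1)])
        maps.append(np.tanh(out))
    return maps

def target_attention(feat, target_vec):
    """Dot-product attention: weight each position of a feature map by
    its similarity to a pooled target-word representation."""
    scores = feat @ target_vec                      # (positions,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ feat                           # (h,) attended vector

# Illustrative shapes: 8-token sentence, 4-dim embeddings, 6 filters each
# for window sizes 2 and 3; a pooled target-word vector of size 6.
x = rng.normal(size=(8, 4))
kernels = [rng.normal(size=(k, 4, 6)) for k in (2, 3)]
target = rng.normal(size=6)

attended = [target_attention(f, target) for f in conv1d_bank(x, kernels)]
multi_scale = np.concatenate(attended)              # multi-scale feature vector

# Linear softmax classifier over sentiment classes (e.g. 3 polarities).
W_cls = rng.normal(size=(multi_scale.size, 3))
logits = multi_scale @ W_cls
probs = np.exp(logits - logits.max()); probs /= probs.sum()
```

The point of the concatenation step is that each window size contributes features at a different scale, so the classifier sees both short-range and longer-range evidence about the target word.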

Description

Technical field

[0001] The invention belongs to the field of deep learning and sentiment analysis, and proposes a Hierarchical Attention Convolutional Neural Network (HACNN) model using a multi-scale attention mechanism, which can effectively analyze the sentiment information corresponding to target words in complex video bullet screens, so that sentiment analysis can be performed on video bullet-screen samples.

Background technique

[0002] With the rapid development of the Internet, the video bullet screen has emerged as a new way of commenting and communicating: viewers can express their views synchronously while watching videos online. Because bullet-screen commenting is new, there has so far been little research on it, yet bullet screens contain a great deal of real-time emotional information about a video and reflect users' attitudes and emotions toward each part of it. Emotionally labeling video key frames according to the emotion of the bullet screens can facil...

Claims


Application Information

IPC(8): G06K9/62; G06F40/284; G06N3/04
CPC: G06N3/045; G06F18/213; G06F18/214
Inventors: 宋威, 温子健
Owner JIANGNAN UNIV