
A spatio-temporal and channel-based multi-attention mechanism video description method

A video description and attention technology, applied in the field of optical communication, which can solve problems such as the reduced sentence-generation ability of the model, the weakened influence of video features on word prediction, and insufficient modeling of the relationship between video features and sentence descriptions.

Active Publication Date: 2018-12-28
UNIV OF ELECTRONIC SCI & TECH OF CHINA
Cites: 5 · Cited by: 19

AI Technical Summary

Problems solved by technology

[0003] The first problem is that video features are not used effectively.
In that paper, the video features are used only at the first decoding step and not at any subsequent moment, so their influence on word prediction weakens as the time sequence grows, which reduces the model's ability to generate sentences.
[0004] A direct solution to this problem is to feed the video features to the decoder at every step; however, since the video features come from multiple consecutive frames, simply mean-pooling them and sending the same vector to the decoding model at every moment still fails to exploit the video features effectively, as the sketch below illustrates.
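To make the objection concrete, here is a minimal sketch (not from the patent; sizes and variable names are illustrative) of what mean pooling does to the decoder's visual input:

```python
import torch

num_frames, feat_dim = 28, 2048                  # hypothetical sizes
frame_feats = torch.randn(num_frames, feat_dim)  # one CNN feature per frame

v_mean = frame_feats.mean(dim=0)  # all temporal structure collapses here

for t in range(10):               # every decoding step receives the same
    visual_input = v_mean         # vector, whatever frames matter for word t
```

Because v_mean is constant across steps, the decoder cannot attend to the frames that are actually relevant to the word currently being predicted.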
[0006] The second problem is the consistency between the visual content features and the sentence description.
Although methods based on temporal attention improve the utilization of video features, at a deeper level they still do not fully model the relationship between video features and sentence descriptions, which leads to the second problem: how to ensure that the generated sentence stays consistent with the visual content. A minimal sketch of such a temporal-attention mechanism follows.
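For reference, a minimal sketch of the temporal attention referred to above, assuming additive (Bahdanau-style) scoring; the module name, dimensions, and scoring form are illustrative assumptions, not the patent's method:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAttention(nn.Module):
    """Weights each frame feature by the previous decoder state."""
    def __init__(self, feat_dim, hidden_dim, attn_dim):
        super().__init__()
        self.w_v = nn.Linear(feat_dim, attn_dim)    # projects frame features
        self.w_h = nn.Linear(hidden_dim, attn_dim)  # projects decoder state
        self.score = nn.Linear(attn_dim, 1)         # scalar score per frame

    def forward(self, frame_feats, h_prev):
        # frame_feats: (num_frames, feat_dim); h_prev: (hidden_dim,)
        e = self.score(torch.tanh(self.w_v(frame_feats) + self.w_h(h_prev)))
        alpha = F.softmax(e.squeeze(-1), dim=0)     # one weight per frame
        context = (alpha.unsqueeze(-1) * frame_feats).sum(dim=0)
        return context, alpha
```

The weights alpha pick out relevant frames at each step, but they say nothing about which spatial regions or channels matter, which is the gap the present invention addresses.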



Examples


Embodiment

[0059] Figure 1 is a schematic diagram of the spatio-temporal and channel-based multi-attention mechanism video description method of the present invention.

[0060] In this embodiment, as shown in Figure 1, the spatio-temporal and channel-based multi-attention mechanism video description method of the present invention extracts powerful and effective visual features from the time domain, the space domain, and the channel dimension respectively, so that the representational ability of the model becomes stronger. The method is introduced in detail below and specifically includes the following steps:

[0061] S1. Randomly extract M videos from the video library, and then input the M videos into the convolutional neural network (CNN) simultaneously, as in the sketch below;
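A minimal sketch of step S1, assuming per-frame feature extraction with a pretrained ResNet-50 from torchvision; M, the frame count, and the input size are illustrative, and the patent does not name a specific CNN:

```python
import torch
import torchvision.models as models

M, num_frames = 4, 28                             # M videos, frames per video
videos = torch.randn(M, num_frames, 3, 224, 224)  # stand-in for decoded frames

cnn = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
cnn.fc = torch.nn.Identity()       # drop the classifier head, keep features
cnn.eval()

with torch.no_grad():
    # fold videos and frames into one batch, then extract one 2048-d
    # feature vector per frame
    feats = cnn(videos.flatten(0, 1)).view(M, num_frames, -1)

print(feats.shape)                 # torch.Size([4, 28, 2048])
```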

[0062] S2. Train the attention-based LSTM neural network:

[0063] Set the maximum number of training rounds to H and the maximum number of iterations within each round to T; the word vector at the initial moment is w0, and h0 is initialized to the zero vector; a sketch of this initialization is given below;
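A minimal sketch of the initialization in S2; H, T, w0 and h0 follow the text above, while the dimensions, the LSTMCell choice, and the cell state c0 are illustrative assumptions:

```python
import torch

H, T = 50, 20                       # hypothetical training rounds / iterations
embed_dim, hidden_dim = 512, 1024   # hypothetical sizes

lstm = torch.nn.LSTMCell(embed_dim, hidden_dim)

for epoch in range(H):
    w = torch.zeros(1, embed_dim)   # w0: word vector at the initial moment
    h = torch.zeros(1, hidden_dim)  # h0: initialized to the zero vector
    c = torch.zeros(1, hidden_dim)  # cell state (assumed; not stated above)
    for t in range(T):
        # the fused attention feature would be mixed into the input here
        h, c = lstm(w, (h, c))      # one decoding step
```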

[00...



Abstract

The invention discloses a spatio-temporal and channel-based multi-attention mechanism video description method. Video features are extracted by a CNN; the video features and the decoder output at the previous time step are processed by the multi-attention network to obtain attention weights over the video features in the time domain, the space domain, and the channel dimension; the three sets of weights are then combined with the video features to obtain a fused feature, yielding more effective video features; finally, the fused features are decoded and output to obtain a description that is more consistent with the video content.
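A minimal sketch of the fusion the abstract describes, assuming the video features form a (frames × channels × regions) tensor and that each set of weights is a softmax conditioned on the previous decoder state plus a pooled feature summary; all module names, dimensions, and the exact scoring form are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiAttentionFusion(nn.Module):
    """Re-weights video features along time, channel, and space, then fuses."""
    def __init__(self, num_frames, channels, regions, hidden_dim):
        super().__init__()
        in_dim = hidden_dim + channels        # decoder state + pooled summary
        self.temporal = nn.Linear(in_dim, num_frames)
        self.channel = nn.Linear(in_dim, channels)
        self.spatial = nn.Linear(in_dim, regions)

    def forward(self, feats, h_prev):
        # feats: (num_frames, channels, regions); h_prev: (hidden_dim,)
        g = feats.mean(dim=(0, 2))                 # pooled summary, (channels,)
        q = torch.cat([h_prev, g])                 # shared query for all heads
        a_t = F.softmax(self.temporal(q), dim=-1)  # time-domain weights
        a_c = F.softmax(self.channel(q), dim=-1)   # channel weights
        a_s = F.softmax(self.spatial(q), dim=-1)   # space-domain weights
        # apply all three weightings, then sum out time and space
        fused = (feats * a_t.view(-1, 1, 1)
                       * a_c.view(1, -1, 1)
                       * a_s.view(1, 1, -1)).sum(dim=(0, 2))
        return fused                               # fused feature, (channels,)
```

The fused vector would then be fed to the decoder at each step in place of a mean-pooled feature, so the visual input changes as h_prev changes.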

Description

Technical field

[0001] The invention belongs to the technical field of optical communication, and more specifically relates to a spatio-temporal and channel-based multi-attention mechanism video description method.

Background technique

[0002] Video description is a research topic spanning the two fields of computer vision and natural language processing, and it has received great attention in recent years. Venugopalan released a video description model based on the "encoding-decoding" framework in 2014. The encoding model in that paper first uses a CNN to extract features for each video frame, and then adopts two encoding schemes: mean pooling and temporal encoding. Although the model has been successfully applied to video description, some problems remain in the video description model:

[0003] The first problem is that video features are not used effectively. In that paper, the video features are used only at the first decoding step and are not used ...


Application Information

IPC(8): G06K9/00; G06N3/04
CPC: G06V20/46; G06N3/045
Inventor: 徐杰, 李林科, 田野, 王菡苑
Owner: UNIV OF ELECTRONIC SCI & TECH OF CHINA