A combined video description method based on multi-modal features and multi-layer attention mechanism

A video description and attention technology, applied in the field of video description, which solves the problems of ignoring the multi-modal features of a video and failing to use the attention mechanism effectively, and achieves the effect of improving description accuracy.

Active Publication Date: 2019-02-15
UNIV OF ELECTRONICS SCI & TECH OF CHINA


Problems solved by technology

At present, the attention mechanism is widely used in sequence learning methods. However, its use has been limited to a single-modal feature extracted from the video, ignoring the multi-modal features of the video itself; as a result, the attention mechanism is not used effectively.




Detailed Description of the Embodiments

[0031] In order to make the purpose, technical solution, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with specific embodiments and the accompanying drawings.

[0032] Referring to Figure 1, the present invention extracts multi-modal data features from the video and combines multi-modal data fusion with an attention mechanism. The specific steps for generating the semantic description are as follows:

[0033] S1. Data preprocessing.

[0034] Segment each sentence describing the video into words, and count all the words that appear to form a vocabulary V; then add the tokens <BOS> and <EOS> to the vocabulary V as the beginning and end markers of a sentence; at the same time, add <BOS> at the beginning of each video description sentence and <EOS> at its end.
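
As a concrete illustration, a minimal Python sketch of this step follows. The whitespace tokenization, the helper names, and the token spellings <BOS> and <EOS> (reconstructed above, since the originals were stripped in extraction) are assumptions for illustration, not taken from the patent.

    # Hypothetical sketch of vocabulary construction with sentence markers.
    def build_vocab(descriptions):
        words = set()
        for sentence in descriptions:
            words.update(sentence.lower().split())   # segment into words
        vocab = ["<BOS>", "<EOS>"] + sorted(words)   # add begin/end tokens to V
        return {w: i for i, w in enumerate(vocab)}   # number each word

    def wrap_sentence(sentence):
        # add <BOS> at the beginning and <EOS> at the end of each description
        return "<BOS> " + sentence + " <EOS>"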

[0035] Each word is then encoded to obtain its binary vector representation; that is, each word is expressed in one-hot form.
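
For concreteness, a one-hot encoding over the vocabulary built above might look as follows; this is a sketch of the standard technique, not necessarily the patent's exact representation.

    import numpy as np

    def one_hot(word, vocab):
        # vocab maps each word to its number, as in build_vocab above
        v = np.zeros(len(vocab), dtype=np.float32)
        v[vocab[word]] = 1.0                         # single 1 at the word's index
        return v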



Abstract

The invention discloses a combined video description method based on multi-modal features and a multi-layer attention mechanism. First, the words appearing in the description sentences are counted to form a vocabulary, and each word is numbered to facilitate vector representation. Then three kinds of feature data are extracted: semantic attribute features, image information features extracted by a 2D-CNN, and video motion information features extracted by a 3D-CNN. The multi-modal data are then dynamically fused through the multi-layer attention mechanism to obtain visual information, and the use of this visual information is adjusted according to the current context. Finally, the words of the video description are generated from the current context and the visual information. By fusing the multi-modal features of the video through the multi-layer attention mechanism and generating the semantic description from the fused features, the invention effectively improves the accuracy of the video description.
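
To make the two attention layers concrete, below is a minimal PyTorch sketch of one plausible reading of the fusion: a first attention layer that summarizes each modality over time, and a second that weights the three modality summaries against the decoder state. The module names, dimensions, and exact wiring are assumptions, not the patent's specified architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TemporalAttention(nn.Module):
        """First layer: weights the time steps of one modality's features."""
        def __init__(self, feat_dim, hidden_dim):
            super().__init__()
            self.score = nn.Linear(feat_dim + hidden_dim, 1)

        def forward(self, feats, h):
            # feats: (T, feat_dim) one modality; h: (hidden_dim,) decoder state
            h_rep = h.unsqueeze(0).expand(feats.size(0), -1)
            alpha = F.softmax(self.score(torch.cat([feats, h_rep], dim=1)), dim=0)
            return (alpha * feats).sum(dim=0)        # (feat_dim,) summary

    class ModalityAttention(nn.Module):
        """Second layer: weights the per-modality summaries dynamically."""
        def __init__(self, feat_dim, hidden_dim):
            super().__init__()
            self.score = nn.Linear(feat_dim + hidden_dim, 1)

        def forward(self, summaries, h):
            # summaries: (3, feat_dim) rows for semantic, 2D-CNN, 3D-CNN features
            h_rep = h.unsqueeze(0).expand(summaries.size(0), -1)
            beta = F.softmax(self.score(torch.cat([summaries, h_rep], dim=1)), dim=0)
            return (beta * summaries).sum(dim=0)     # fused visual information

Stacking the three per-modality summaries and re-applying the second layer at every decoding step is what would let such a fusion adapt to the current context.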

Description

Technical Field

[0001] The invention belongs to the field of video description, and in particular relates to a combined video description method based on multi-modal features combined with a multi-layer attention mechanism.

Background Technique

[0002] At present, the schemes used to generate description sentences for videos are mainly divided into template-based language methods and sequence learning methods.

[0003] Template-based language methods first align each sentence fragment (e.g., subject, verb, object) with words detected from the visual content, and then use predefined language templates to generate sentences, which are highly dependent on the sentence template. The sentence patterns generated by this method are fixed, and sentences outside the language template cannot be generated.

[0004] The sequence learning method designs an encoding-decoding network: it first uses a CNN (convolutional neural network) to encode the video, and then decodes it through...
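
For background context only, a bare-bones version of such an encoding-decoding captioner is sketched below; the GRU decoder, the greedy decoding loop, and all names here are generic assumptions about sequence learning methods, not the invention's method.

    import torch
    import torch.nn as nn

    class CaptionDecoder(nn.Module):
        """Generic CNN-encoder / RNN-decoder captioner (illustrative only)."""
        def __init__(self, vocab_size, feat_dim=512, emb_dim=256, hidden_dim=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.gru = nn.GRUCell(emb_dim + feat_dim, hidden_dim)
            self.out = nn.Linear(hidden_dim, vocab_size)

        @torch.no_grad()
        def greedy_decode(self, video_feat, bos_id, eos_id, max_len=20):
            # video_feat: (1, feat_dim) pooled CNN encoding of the video
            h = video_feat.new_zeros(1, self.gru.hidden_size)
            word = torch.tensor([bos_id])
            caption = []
            for _ in range(max_len):
                x = torch.cat([self.embed(word), video_feat], dim=1)
                h = self.gru(x, h)
                word = self.out(h).argmax(dim=1)     # greedy next-word choice
                if word.item() == eos_id:
                    break
                caption.append(word.item())
            return caption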


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F16/73, G06N3/04, G06N3/08
CPC: G06N3/049, G06N3/08, G06N3/045
Inventors: 田玲, 罗光春, 惠孛, 刘贵松, 杨彬
Owner: UNIV OF ELECTRONICS SCI & TECH OF CHINA