Video description method based on high-order low-rank multi-modal attention mechanism

A video description method based on multi-modal technology, applied in the field of computer vision. It addresses the problem that existing attention mechanisms ignore the correlation information among multi-modal features, which degrades video description accuracy, thereby achieving good application value and improving both efficiency and accuracy.

Active Publication Date: 2020-02-21
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

The decoder generally uses a separate recurrent neural network combined with an attention mechanism, but the current attention mechanism ignores the correlation information among multi-modal features, which affects the accuracy of the video description.
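To make the shortcoming concrete, here is a minimal sketch (in PyTorch; all names and shapes are illustrative assumptions, not the patent's implementation) of the conventional decoder-side attention being criticized: each modality is scored independently against the decoder state, so correlations between modalities never enter the attention weights.

```python
import torch
import torch.nn.functional as F

def per_modality_attention(h, feats, W_h, W_f, w):
    """Conventional additive attention over ONE modality.
    h: (d_h,) decoder hidden state; feats: (T, d_f) features of one modality;
    W_h: (d_a, d_h), W_f: (d_a, d_f), w: (d_a,) are learned parameters."""
    scores = torch.tanh(W_h @ h + feats @ W_f.T) @ w  # (T,) one score per time step
    alpha = F.softmax(scores, dim=0)                  # attention weights over time
    return alpha @ feats                              # (d_f,) attended context

# Each modality (RGB, motion, audio, ...) yields its own context vector, and the
# vectors are only concatenated afterwards, so no cross-modal interaction is modeled.
```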


Examples


Example Embodiment

[0047] In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the present invention and are not intended to limit it.

[0048] On the contrary, the present invention covers any alternatives, modifications, equivalent methods, and schemes that fall within the spirit and scope of the present invention as defined by the claims. Furthermore, in order to give the public a better understanding of the present invention, some specific details are described in the following detailed description; those skilled in the art can fully understand the present invention even without these details.

[0049] Referring to figure 1, in a preferred embodiment of the present invention, the video description method...


Abstract

The invention discloses a video description method based on a high-order low-rank multi-modal attention mechanism, which is used to generate a short and accurate description for a given video clip. The method specifically comprises the following steps: obtaining a video data set for training a video description generation model, and defining the algorithm target; modeling time-sequence multi-modal features in the video data set; establishing a high-order low-rank multi-modal attention mechanism on a decoder based on the time-sequence multi-modal features; and generating a description of an input video using the model. The method is suitable for video description generation in real video scenes, and has better effect and robustness under various complex conditions.
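As a rough illustration of what the abstract's "high-order low-rank multi-modal attention" can look like, the sketch below scores each time step through a joint multiplicative (high-order) interaction between the decoder state and every modality, factorized through a shared rank-R space in the spirit of low-rank multi-modal fusion. Every class name, dimension, and design detail here is an assumption for illustration, not the patent's disclosed architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowRankMultimodalAttention(nn.Module):
    """Hypothetical rank-R factorization of a high-order attention tensor."""
    def __init__(self, d_h, d_mods, rank=8):
        super().__init__()
        self.U_h = nn.Linear(d_h, rank, bias=False)            # decoder-state factor
        self.U_m = nn.ModuleList(nn.Linear(d, rank, bias=False) for d in d_mods)
        self.w = nn.Parameter(torch.randn(rank))               # scorer in rank space

    def forward(self, h, feats):
        """h: (d_h,); feats: list of time-aligned (T, d_m) tensors, one per modality."""
        # The element-wise product in rank space realizes the high-order
        # interaction h x modality_1 x ... x modality_M without ever
        # materializing the full interaction tensor.
        joint = self.U_h(h)                                    # (R,)
        for U, f in zip(self.U_m, feats):
            joint = joint * U(f)                               # (T, R) after the first modality
        alpha = F.softmax(joint @ self.w, dim=0)               # (T,) joint attention weights
        # A single weight vector now spans all modalities, so the attended
        # contexts reflect cross-modal correlations.
        return [alpha @ f for f in feats], alpha
```

For instance, `attn = LowRankMultimodalAttention(512, [2048, 128])` followed by `contexts, alpha = attn(h, [rgb, audio])` would attend jointly over time-aligned RGB and audio features (dimensions hypothetical).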

Description

Technical Field

[0001] The invention belongs to the field of computer vision, and in particular relates to a video description method based on a high-order low-rank multi-modal attention mechanism.

Background Technique

[0002] In today's society, video has become an indispensable part of human life; it can be said to be everywhere. This environment has also greatly advanced research on the semantic content of video. At present, most research on video concentrates on lower-level tasks such as classification and detection. Thanks to the development of recurrent neural networks, the new task of video description generation has come into view: given a video clip, a trained network model automatically generates a sentence describing it. Its applications in the real world are also very extensive. For example, about 100 hours of video are uploaded to YouTube every minute. If the generated video resources ar...
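For orientation only, here is a generic greedy decoding loop for such a video captioning model; the `decoder` interface and all names below are hypothetical and not taken from the patent.

```python
import torch

def greedy_describe(decoder, video_feats, vocab, bos_id, eos_id, max_len=20):
    """Assumes decoder(word_id, state, video_feats) -> (logits, new_state)."""
    words, state, word_id = [], None, bos_id
    for _ in range(max_len):
        logits, state = decoder(word_id, state, video_feats)
        word_id = int(torch.argmax(logits))   # pick the most likely next word
        if word_id == eos_id:                 # stop at the end-of-sentence token
            break
        words.append(vocab[word_id])
    return " ".join(words)
```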


Application Information

IPC(8): G06K9/00; G06N3/04; G06N3/08
CPC: G06N3/08; G06V20/41; G06N3/045
Inventor: Jin Tao, Li Yingming, Zhang Zhongfei
Owner: ZHEJIANG UNIV