Method for generating query-oriented video abstract by using convolutional multilayer attention network mechanism

A video summarization technology based on attention mechanisms, applied in image communication, selective content distribution, electrical components, etc. It addresses the problems of long model computation time and the difficulty of modeling long-range relationships within a video, and achieves the effect of reducing the number of model parameters.

Active Publication Date: 2020-03-27
ZHEJIANG UNIV


Problems solved by technology

However, sequential models typically perform calculations step by step, so as the length of the video increases, the computation time of the model also increases; moreover, such models cannot easily capture long-range relationships within the video.
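The contrast with step-by-step computation can be illustrated with a minimal NumPy sketch of scaled dot-product self-attention: every pairwise shot relationship is obtained in a single matrix product rather than by iterating over time steps as a recurrent model would. The toy shot count and feature dimension are assumptions for illustration, not values from the patent.

```python
import numpy as np

def self_attention(features):
    """Self-attention over all video-shot features at once.

    Unlike a recurrent model, which processes shots one step at a
    time, every pairwise shot relationship here is computed in one
    matrix product, so no sequential dependency on video length
    remains in the per-layer computation.
    """
    d = features.shape[-1]
    scores = features @ features.T / np.sqrt(d)      # (n_shots, n_shots)
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ features                        # attended shot features

shots = np.random.rand(6, 4)   # 6 shots with 4-dim toy features
out = self_attention(shots)
print(out.shape)  # (6, 4)
```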



Examples


Embodiment

[0065] The invention is experimentally validated on the query-oriented video summarization dataset proposed in (Sharghi, Laurel, and Gong 2017). The dataset consists of 4 videos covering different daily-life scenes, each lasting 3 to 5 hours. It provides a set of 48 concepts for user queries and contains 46 queries, each consisting of two concepts. A query falls into one of four scenarios: 1) all concepts in the query appear in the same shot of the video; 2) all concepts in the query appear in the video, but never in the same shot; 3) only some of the concepts in the query appear in the video; 4) none of the concepts in the query appear in the video. Annotations are provided at the level of video shots, with each shot labeled with several concepts. The present invention then performs the following preprocessing on the query-oriented video summarization dataset:
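As a sketch, the four query scenarios above can be determined mechanically from the per-shot concept annotations. The function and the toy annotations below are illustrative assumptions, not tooling shipped with the dataset.

```python
def query_scenario(query, shot_annotations):
    """Classify a two-concept query against one video's shot labels.

    query            -- set of two concepts, e.g. {"food", "street"}
    shot_annotations -- list of concept sets, one per video shot
    Returns the scenario number (1-4) described in the text.
    """
    per_shot = [query & concepts for concepts in shot_annotations]
    appearing = set().union(*per_shot)
    if appearing == query:
        if any(hit == query for hit in per_shot):
            return 1   # all query concepts co-occur in one shot
        return 2       # all appear in the video, never in the same shot
    if appearing:
        return 3       # only some query concepts appear
    return 4           # no query concept appears

# Toy video with three annotated shots (hypothetical concepts).
video = [{"food", "street"}, {"car"}, {"street"}]
print(query_scenario({"food", "street"}, video))  # 1
print(query_scenario({"food", "car"}, video))     # 2
print(query_scenario({"food", "sky"}, video))     # 3
print(query_scenario({"sea", "sky"}, video))      # 4
```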

[0066] 1) Sample the video at 1 fps, then resize all fra...
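The 1 fps sampling step can be sketched as a pure index computation; the actual frame decoding and resizing (whose target size is truncated above) would be done with a standard tool such as ffmpeg or OpenCV and is assumed, not shown.

```python
def one_fps_indices(n_frames, native_fps):
    """Indices of the frames kept when downsampling a video to 1 fps.

    Keeps one frame per second of video by stepping through the
    frame sequence at the native frame rate.
    """
    step = max(1, int(round(native_fps)))
    return list(range(0, n_frames, step))

# e.g. a 10-frame clip recorded at 3 fps keeps frames 0, 3, 6, 9
print(one_fps_indices(10, 3))  # [0, 3, 6, 9]
```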



Abstract

The invention discloses a method for generating a query-oriented video abstract by using a convolutional multilayer attention network mechanism. The method comprises the following steps: 1) segmenting a group of videos to construct video clips, and extracting visual features of each shot of the video by using a fully convolutional neural network; 2) learning semantic relationships among all shots within a video clip by using a local self-attention mechanism, and generating visual features of the video shots; 3) learning semantic relationships between different clips of the video by using a query-related global attention mechanism, and generating query-oriented visual features of the video shots; 4) calculating a similarity score between each video shot and the user query to generate a query-related video abstract. Compared with common video abstract solutions, the method uses a convolutional multilayer attention mechanism, reflects the query-related visual features of the video more accurately, and generates a more coherent video abstract. The results obtained in video abstraction are better than those of traditional methods.
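The four steps of the abstract can be sketched end to end in NumPy. The mean-pooled clip representations, dot-product relevance scoring, and top-k shot selection below are simplifying stand-ins for the patent's trained networks, and all dimensions are toy assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def local_self_attention(clip):
    """Step 2: relate all shots within one clip (local attention)."""
    d = clip.shape[-1]
    return softmax(clip @ clip.T / np.sqrt(d)) @ clip

def query_global_attention(clips, query_vec):
    """Step 3: relate clips to each other, conditioned on the query."""
    clip_reps = np.stack([c.mean(axis=0) for c in clips])  # one vector per clip
    relevance = softmax(clip_reps @ query_vec)             # query weight per clip
    # Mix each clip's shots with its query-weighted clip representation.
    return [c + relevance[i] * clip_reps[i] for i, c in enumerate(clips)]

def summarize(video_shots, clip_len, query_vec, k=2):
    """Steps 1-4: segment, attend locally and globally, score, select."""
    clips = [video_shots[i:i + clip_len]                   # step 1: clips
             for i in range(0, len(video_shots), clip_len)]
    clips = [local_self_attention(c) for c in clips]       # step 2
    clips = query_global_attention(clips, query_vec)       # step 3
    shots = np.concatenate(clips)
    scores = shots @ query_vec                             # step 4: similarity
    return np.sort(np.argsort(scores)[-k:])                # top-k shot indices

feats = np.random.rand(8, 4)   # 8 shots of toy visual features (step 1's CNN assumed)
query = np.random.rand(4)      # toy query embedding
summary = summarize(feats, clip_len=4, query_vec=query)
print(summary.shape)  # (2,)
```

The selected indices are returned in temporal order so the resulting abstract plays coherently, mirroring the consistency goal stated above.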

Description

technical field

[0001] The present invention relates to video summarization, and more particularly to a method for generating query-relevant video summaries using convolutional multi-layer attention networks.

Background technique

[0002] Automatic video summarization techniques are widely used in many fields, such as action recognition, surveillance video analysis, visual diary creation from personal life-log videos, and video previews for video sites.

[0003] Existing video summarization methods focus on finding the most diverse and representative visual content, without considering user preferences. They can be divided into two categories: (1) general video summarization and (2) query-oriented video summarization. General video summarization generates a compact version of the original video by selecting the highlights of a long video and removing redundant content; query-oriented video summarization not only removes redundant parts of the video, finds key ...


Application Information

IPC(8): H04N21/845; H04N21/8549
CPC: H04N21/8456; H04N21/8549
Inventors: 赵洲, 许亦陈, 肖舒文
Owner ZHEJIANG UNIV