Video abstract generation method fusing local target features and global features

A technology that fuses local target features and global features, applied to neural learning methods, computer components, biological neural network models, and related fields. It addresses problems such as the lack of visual expressiveness in representation features, the neglect of local target features, and the neglect of interactive relationships between targets, achieving detail-rich, performance-boosting, and highly expressive results.

Active Publication Date: 2021-07-20
XI AN JIAOTONG UNIV

AI Technical Summary

Problems solved by technology

However, existing methods ignore the local target features in a video as well as the interactive relationships between targets, so the generated representation features lack sufficient visual expressiveness.



Examples


Embodiment Construction

[0018] The implementation of the present invention will be described in detail below with reference to the accompanying drawings and embodiments.

[0019] As shown in Figure 1, the present invention provides a video abstract generation method that fuses local target features and global features, comprising:

[0020] Step 1: extract the local target features of the video.

[0021] The local target features include the visual features of the targets, their trajectory features, and their category label features. Referring to Figure 2, the extraction of local target features specifically includes:

[0022] Step 1.1: Segment and sample the original video data according to video scenes to obtain a collection of images.

[0023] Since videos usually contain multiple scenes, and there is no temporal relationship between objects in different scenes, the presence of multiple complex scenes is a major obstacle to introducing image-based object detection models into video. The ...
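The patent text visible here does not specify how scenes are detected or how frames are sampled, so the following Python sketch is only one plausible realization of Step 1.1: a color-histogram shot-boundary detector built on OpenCV, followed by uniform sampling within each scene. The function name, the Bhattacharyya-distance threshold, and the per-scene frame count are all illustrative assumptions.

```python
# Hypothetical sketch of Step 1.1: split a video into scenes and uniformly
# sample frames from each scene, so an image-based object detector can be
# run on the resulting image collection. The boundary rule (HSV-histogram
# distance between consecutive frames) is an assumption, not the patent's.
import cv2
import numpy as np

def segment_and_sample(video_path, boundary_threshold=0.5, frames_per_scene=8):
    """Return a list of scenes, each a list of sampled BGR frames."""
    cap = cv2.VideoCapture(video_path)
    scenes, current_scene, prev_hist = [], [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        # A large histogram distance between consecutive frames is treated
        # as a scene boundary.
        if prev_hist is not None:
            dist = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if dist > boundary_threshold and current_scene:
                scenes.append(current_scene)
                current_scene = []
        current_scene.append(frame)
        prev_hist = hist
    cap.release()
    if current_scene:
        scenes.append(current_scene)
    # Uniform sampling within each scene yields the "collection of images".
    sampled = []
    for scene in scenes:
        n = min(frames_per_scene, len(scene))
        idx = np.linspace(0, len(scene) - 1, n).astype(int)
        sampled.append([scene[i] for i in idx])
    return sampled
```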



Abstract

A video abstract generation method fusing local target features and global features comprises the steps of: extracting the local target features of a video, where the local target features comprise the visual features, motion trajectory features, and category label features of the targets; constructing a local target feature fusion network with an attention mechanism and feeding it the local target features to obtain fused local target features; extracting the global features of the video with the encoder of an encoding-decoding framework, introducing the fused local features into that framework, and fusing the video's global feature information with its local target feature information to obtain representation vectors with richer expressiveness; and decoding the corresponding abstract sentence from the representation vectors. By introducing local target features of the video into an encoding-decoding video abstract generation model, the method enriches the visual expressiveness of the representation features, thereby improving the final text generation and producing relevant semantic text descriptions of the input video.
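To make the fusion step concrete, here is a minimal PyTorch sketch of the idea the abstract describes: per-target local features are fused by an attention module conditioned on the decoder state, concatenated with the encoder's global video feature, and decoded into a summary sentence. The module names, dimensions, and the GRU-cell decoder are illustrative assumptions, not the patent's exact architecture.

```python
# Illustrative sketch of the abstract's pipeline: attention-based fusion of
# local target features, concatenation with the global video feature, and
# decoding of a summary sentence. All names and sizes are assumptions.
import torch
import torch.nn as nn

class LocalFeatureFusion(nn.Module):
    """Attention over per-target local features, conditioned on a query."""
    def __init__(self, local_dim, query_dim):
        super().__init__()
        self.score = nn.Linear(local_dim + query_dim, 1)

    def forward(self, local_feats, query):
        # local_feats: (batch, num_targets, local_dim); query: (batch, query_dim)
        q = query.unsqueeze(1).expand(-1, local_feats.size(1), -1)
        weights = torch.softmax(self.score(torch.cat([local_feats, q], dim=-1)), dim=1)
        return (weights * local_feats).sum(dim=1)   # (batch, local_dim)

class SummaryDecoder(nn.Module):
    """Decode a summary sentence from global + fused local features."""
    def __init__(self, vocab_size, embed_dim, global_dim, local_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.fuse = LocalFeatureFusion(local_dim, hidden_dim)
        self.init_h = nn.Linear(global_dim, hidden_dim)
        self.cell = nn.GRUCell(embed_dim + global_dim + local_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, global_feat, local_feats, tokens):
        # global_feat: (batch, global_dim) from the encoder;
        # tokens: (batch, T) ground-truth words for teacher forcing.
        h = torch.tanh(self.init_h(global_feat))
        logits = []
        for t in range(tokens.size(1)):
            fused = self.fuse(local_feats, h)       # re-attend at each step
            x = torch.cat([self.embed(tokens[:, t]), global_feat, fused], dim=-1)
            h = self.cell(x, h)
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)           # (batch, T, vocab_size)
```

Re-computing the attention weights at every decoding step lets each generated word focus on different targets, which is one way the interplay between targets and the sentence being generated can be modeled in this sketch.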

Description

Technical field

[0001] The invention belongs to the technical fields of artificial intelligence, computer vision, and natural language processing; it relates to video comprehension and video summary generation, and in particular to a video summary generation method that integrates local target features and global features.

Background technique

[0002] With the continuous development and maturation of artificial intelligence technology in computer vision and natural language processing, video summarization, a task at the intersection of these fields, has gradually become one of the research hotspots in artificial intelligence. The video summarization task refers to, given a video, using a computer to generate a text (currently mainly in English) that describes the content of the video, so as to achieve the purpose of understanding it. Video summarization is an important branch of video understanding tasks. ...


Application Information

IPC(8): G06K9/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V20/47, G06V20/49, G06N3/044, G06N3/045, G06F18/253, Y02D10/00
Inventor: 杜友田, 张光勋
Owner: XI AN JIAOTONG UNIV