
Video sentence localization method based on a multi-stage aggregation Transformer model

A multi-stage aggregation and video technology, applied in neural learning methods, biological neural network models, character and pattern recognition, etc. It addresses problems such as treating stages independently, discarding stage information, and the resulting inability to match and localize different stages precisely.

Active Publication Date: 2021-03-12
GUIZHOU UNIV
Cites: 6 · Cited by: 0

AI Technical Summary

Problems solved by technology

However, average pooling completely discards stage information and therefore cannot match different stages precisely enough for accurate localization.
Although full-convolution or RoI-pooling operations can characterize different stages to some extent, they do not rely on explicit stage-specific features, so they too fall short of more precise localization.
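The contrast between stage-agnostic average pooling and a stage-aware representation can be sketched as follows. This is a toy numpy illustration, not the patent's implementation: the even three-way split into start/middle/end stages and the mean pooling within each stage are assumptions made for the example.

```python
import numpy as np

def average_pool(slice_feats):
    """Collapse a candidate segment into one vector: all stage information is lost."""
    return slice_feats.mean(axis=0)

def multi_stage_features(slice_feats):
    """Keep stage-specific cues by pooling the start, middle, and end thirds
    separately and concatenating them (assumed stage split for illustration)."""
    thirds = np.array_split(slice_feats, 3, axis=0)
    return np.concatenate([part.mean(axis=0) for part in thirds])

feats = np.random.rand(9, 128)            # 9 video slices, 128-dim features
pooled = average_pool(feats)              # (128,)  -- stages merged together
staged = multi_stage_features(feats)      # (384,)  -- start/middle/end kept apart
```

Because the staged vector preserves which features came from the beginning versus the end of the candidate segment, a downstream scorer can match the query's start and end cues explicitly, which is exactly what a single averaged vector cannot do.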




Embodiment Construction

[0060] Specific embodiments of the present invention are described below in conjunction with the accompanying drawings, so that those skilled in the art can better understand the invention. Note that detailed descriptions of known functions and designs are omitted where they would obscure the main content of the invention.

[0061] Figure 1 is a flow chart of a specific embodiment of the video sentence localization method based on the multi-stage aggregation Transformer model of the present invention.

[0062] In this embodiment, as shown in Figure 1, the video sentence localization method based on the multi-stage aggregation Transformer model includes the following steps:

[0063] Step S1: Extract video slice features and word features

[0064] In this embodiment, as shown in Figure 2, the video is evenly divided into N time points, and at each time point a video slice (composed...
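Step S1 as described can be sketched as follows. This is a toy illustration under stated assumptions: mean pooling of frame features stands in for a pretrained video encoder, and the hash-seeded word vectors stand in for real word embeddings (e.g. GloVe); the function names and dimensions are invented for the example.

```python
import numpy as np

def extract_slice_features(frame_feats, n_slices):
    """Evenly divide the frame sequence into n_slices time points and
    produce one feature vector per slice (mean pooling as a stand-in encoder)."""
    slices = np.array_split(frame_feats, n_slices, axis=0)
    return np.stack([s.mean(axis=0) for s in slices])

def extract_word_features(sentence, dim=300):
    """Toy per-word embeddings, deterministically seeded from each word;
    a real system would look up pretrained embeddings instead."""
    vecs = []
    for word in sentence.lower().split():
        rng = np.random.default_rng(abs(hash(word)) % (2**32))
        vecs.append(rng.standard_normal(dim))
    return np.stack(vecs)

frames = np.random.rand(200, 512)                    # 200 frames, 512-dim features
V = extract_slice_features(frames, n_slices=16)      # (16, 512): one vector per slice
Q = extract_word_features("person opens the door")   # (4, 300): one vector per word
```

The two matrices V and Q are the per-slice and per-word inputs that the subsequent Transformer stages operate on.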



Abstract

The invention discloses a video sentence localization method based on a multi-stage aggregation Transformer model. In the video-sentence Transformer model, each video slice or word can adaptively aggregate and align information from all other video slices or words across the two modalities according to its semantics. Through multi-layer stacking, the resulting joint video-sentence representation captures rich visual-linguistic cues and enables finer matching. In the multi-stage aggregation module, the stage features of the starting stage, the intermediate stage, and the ending stage are concatenated to form the feature representation of a candidate segment. Because this representation captures stage-specific information, it is well suited to precisely localizing the start and end positions of a video clip. The two modules are integrated into an effective and efficient network that improves the accuracy of video sentence localization.
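The adaptive cross-modal aggregation described above can be sketched as a single attention layer over the concatenated video-slice and word tokens, so that every token can draw information from every other token in both modalities. This is a minimal numpy sketch under assumed dimensions, not the patented architecture (which stacks multiple layers with learned projections).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def joint_attention(video_feats, word_feats):
    """One self-attention pass over the concatenated video-slice and word
    tokens: each slice/word aggregates from all others, in both modalities."""
    X = np.concatenate([video_feats, word_feats], axis=0)  # (T + L, d)
    d = X.shape[1]
    A = softmax(X @ X.T / np.sqrt(d))                      # token-to-token weights
    return A @ X                                           # aggregated joint tokens

V = np.random.rand(16, 64)   # 16 video-slice tokens, 64-dim (assumed)
Q = np.random.rand(5, 64)    # 5 word tokens, same dimension
Z = joint_attention(V, Q)    # (21, 64): joint video-sentence representation
```

Stacking several such layers, each with its own learned query/key/value projections, is what gives the joint representation its visual-linguistic matching capability.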

Description

Technical field

[0001] The invention belongs to the technical field of video sentence localization and retrieval, and more specifically relates to a video sentence localization method based on a multi-stage aggregation Transformer model.

Background technique

[0002] Video localization is a fundamental problem in computer vision with wide-ranging applications. Over the past decade, substantial research and application development has been devoted to video action localization. In recent years, with the growth of multimedia data and the diversification of user needs, the problem of localizing sentences in videos (video sentence localization) has become increasingly important. Its purpose is to locate, within a very long video, the video segment corresponding to a query sentence. Compared with video action localization, sentence localization poses greater challenges and has broader application prospects, such as video retrie...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06V20/41; G06N3/047; G06N3/045; G06F18/22; G06F18/214
Inventor: 杨阳, 张明星
Owner: GUIZHOU UNIV