Text-to-video cross-modal retrieval method based on multi-faceted video representation learning

A cross-modal video retrieval technology, applied to video data retrieval, video data query, neural learning methods, and related fields. It addresses problems such as query text not describing all of a video's content, multi-scene information being blurred by single-vector encoding, and the resulting loss of accuracy in text-video retrieval results, with the effect of improving retrieval performance.

Pending Publication Date: 2022-07-29
ZHEJIANG GONGSHANG UNIVERSITY

AI Technical Summary

Problems solved by technology

However, this traditional encoding approach has shortcomings. Due to the nature of video and text, a video may contain many different scenes as the photographer moves or the viewing angle changes during shooting, while a query text may not describe the entire content of the corresponding video; that is, the query text and the video are only partially related.
If the video is represented by only a single feature vector, its multi-scene information may be blurred, resulting in an inaccurate video representation that ultimately degrades the accuracy of the text-video retrieval results.




Embodiment Construction

[0047] The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.

[0048] In order to solve the problem of text-to-video cross-modal retrieval, the present invention proposes a text-to-video cross-modal retrieval method based on multi-faceted video representation learning. In one embodiment, the specific steps are as follows:

[0049] (1) Extract the features of the two modalities, video and text, using a different feature extraction method for each.

[0050] (1-1) For a given video, frames are uniformly sampled every 0.5 seconds, yielding j video frames. Deep features are then extracted for each frame using a convolutional neural network (CNN) model pre-trained on the ImageNet dataset, such as a ResNet model. In this way, the video is described by a sequence of feature vectors {v_1, v_2, ..., v_t, ..., v_j}, where v_t denotes the feature vector of the t-th frame...
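As a concrete illustration of step (1-1), the following is a minimal sketch assuming PyTorch and torchvision, with ResNet-152 chosen here only as an example ImageNet-pretrained backbone; the function names and the assumption that the 0.5-second frame sampling has already been done upstream are illustrative, not details specified by the patent.

    # Minimal sketch of step (1-1) (assumption: PyTorch + torchvision; ResNet-152
    # is used only as an example ImageNet-pretrained backbone).
    import torch
    import torchvision.models as models
    import torchvision.transforms as T

    # ImageNet-pretrained ResNet with the classification head replaced by identity,
    # so each frame is mapped to a 2048-dimensional feature vector.
    backbone = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
    backbone.fc = torch.nn.Identity()
    backbone.eval()

    # Standard ImageNet preprocessing applied to each sampled frame.
    preprocess = T.Compose([
        T.ToPILImage(),
        T.Resize(256),
        T.CenterCrop(224),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def extract_frame_features(frames):
        """frames: list of j RGB frames (H x W x 3 uint8 arrays), already sampled
        uniformly every 0.5 seconds. Returns a (j, 2048) tensor [v_1, ..., v_j]."""
        batch = torch.stack([preprocess(f) for f in frames])
        with torch.no_grad():
            feats = backbone(batch)
        return feats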



Abstract

The invention discloses a text-to-video cross-modal retrieval method based on multi-faceted video representation learning. The method comprises the following steps: acquiring preliminary features of the video and the text; grouping the initial video frames into different scenes with a video splitting tool and feeding them into an explicit encoding branch to obtain an explicit multi-faceted representation of the different scenes of the video; feeding the initial video features into an implicit encoding branch, where a multiple-attention network over the preceding features implicitly encodes them into an implicit multi-faceted representation expressing the different semantic contents of the video; fusing the encodings of the two branches to obtain the multi-faceted video feature representation; mapping the multi-faceted video feature representation and the text features into a common space, learning the relevance between the two modalities with a common-space learning algorithm, and training the model end-to-end to achieve text-to-video cross-modal retrieval. The method improves retrieval performance by exploiting the idea of multi-faceted video representation.
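The abstract above outlines a dual-branch architecture. Below is a minimal, hypothetical sketch of that idea, assuming PyTorch: an explicit branch that pools frame features within detected scenes, an implicit branch that applies multi-head self-attention over the frame features, fusion of the two facet sets, and a cosine-similarity score against a text embedding in the common space. All module names, dimensions, and the max-over-facets scoring rule are assumptions for illustration; the patent's actual network design, common-space learning algorithm, and training objective are not reproduced here.

    # Hypothetical sketch of the dual-branch idea described in the abstract
    # (assumption: PyTorch; modules, dimensions, and scoring rule are illustrative).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiFacetVideoEncoder(nn.Module):
        def __init__(self, frame_dim=2048, common_dim=512, n_heads=8):
            super().__init__()
            self.explicit_proj = nn.Linear(frame_dim, common_dim)  # explicit (per-scene) branch
            self.implicit_attn = nn.MultiheadAttention(frame_dim, n_heads, batch_first=True)
            self.implicit_proj = nn.Linear(frame_dim, common_dim)  # implicit (attention) branch

        def forward(self, frame_feats, scene_ids):
            # frame_feats: (j, frame_dim) per-frame CNN features
            # scene_ids:   (j,) scene index per frame, e.g. from a video-splitting tool
            # Explicit branch: mean-pool the frames belonging to each detected scene.
            scene_vecs = [frame_feats[scene_ids == s].mean(dim=0) for s in scene_ids.unique()]
            explicit = self.explicit_proj(torch.stack(scene_vecs))   # (num_scenes, common_dim)
            # Implicit branch: self-attention over all frames captures semantic facets
            # that need not align with scene boundaries.
            attended, _ = self.implicit_attn(frame_feats[None], frame_feats[None], frame_feats[None])
            implicit = self.implicit_proj(attended[0])               # (j, common_dim)
            # Fuse the two branches into one set of facet vectors in the common space.
            facets = torch.cat([explicit, implicit], dim=0)
            return F.normalize(facets, dim=-1)

    def text_video_similarity(text_emb, facets):
        # Relevance of a text query to a video: best match over the video's facets.
        text_emb = F.normalize(text_emb, dim=-1)
        return (facets @ text_emb).max()

In this sketch a single text embedding is scored against all facets and the best-matching facet determines the relevance, which is one common way to exploit a multi-faceted video representation in a shared space.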

Description

Technical Field

[0001] The invention relates to the technical field of video cross-modal retrieval, and in particular to a text-to-video cross-modal retrieval method based on multi-faceted video representation learning.

Background Art

[0002] In recent years, owing to the popularization of the Internet and mobile smart devices and the rapid development of communication and multimedia technologies, massive amounts of multimedia data are created and uploaded to the Internet every day, at an ever-increasing rate, and these multimedia data have become the most important source of information for modern people. This is especially true for video data: people readily upload and share the videos they create, and quickly and accurately retrieving the videos a user needs is a daunting challenge. Text-to-video cross-modal retrieval is one of the key techniques for alleviating this challenge.

[0003] Existing text-to-video cross-modal retrieval assumes that all videos do no...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F16/73, G06F16/783, G06N3/04, G06N3/08
CPC: G06F16/73, G06F16/7844, G06F16/785, G06N3/08, G06N3/044, G06N3/045
Inventor: 董建锋, 陈先客, 王勋, 刘宝龙, 包翠竹
Owner: ZHEJIANG GONGSHANG UNIVERSITY