
Video description method based on multi-feature fusion

A multi-feature fusion technology applied in the field of video description

Active Publication Date: 2020-11-03
SUZHOU UNIV

AI Technical Summary

Problems solved by technology

[0011] The purpose of the present invention is to propose a video description method based on multi-feature fusion that solves the problems of existing video description methods. The method extracts more robust spatio-temporal features, adds an overall (average-pooled) feature to establish more connections between visual information and words, and replaces the one-hot word representation with word2vec word vectors to establish more connections between words, thereby improving video description performance.
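The contrast between one-hot and word2vec word representations mentioned above can be illustrated with a minimal sketch. The toy vocabulary and the random embedding table below are hypothetical placeholders, not the trained word2vec vectors the patent refers to:

```python
import numpy as np

# Hypothetical toy vocabulary; the patent replaces sparse one-hot word
# vectors with dense word2vec embeddings to capture relations between words.
vocab = ["a", "man", "is", "playing", "guitar"]
word_to_idx = {w: i for i, w in enumerate(vocab)}

def one_hot(word):
    """Sparse one-hot representation: every pair of words is equidistant."""
    v = np.zeros(len(vocab))
    v[word_to_idx[word]] = 1.0
    return v

# Dense embedding table standing in for pretrained word2vec vectors
# (values here are random placeholders, not trained embeddings).
rng = np.random.default_rng(0)
embedding = rng.normal(size=(len(vocab), 8))

def word2vec(word):
    """Dense representation: related words can have similar vectors."""
    return embedding[word_to_idx[word]]

print(one_hot("man"))         # 5-dim sparse vector with a single 1
print(word2vec("man").shape)  # (8,) dense vector
```

With a one-hot encoding, cosine similarity between any two distinct words is zero; dense embeddings allow semantically related words to lie close together, which is the "more connections between words" the patent claims.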

Method used



Examples


Embodiment

[0056] Embodiment: with reference to Figures 1 to 8, the video description method based on multi-feature fusion provided by the present invention is described as follows:

[0057] The overall flowchart and framework diagram of the inventive method are shown in Figures 1 and 2, respectively. First, the deep spatio-temporal features of the video are extracted by fusing traditional CNN features with SIFT-flow features. Then, based on the extracted features, an S2VT sentence generation model with integrated features is used to generate the corresponding sentence description. Finally, word2vec word vectors replace the one-hot word representation to optimize the sentence generation model.
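The fusion step above can be sketched as a per-frame concatenation of appearance and motion descriptors. The feature dimensions (2048 for CNN, 512 for SIFT-flow) and the function name are illustrative assumptions, not values stated in the patent:

```python
import numpy as np

def fuse_features(cnn_feats, sift_flow_feats):
    """Concatenate per-frame appearance (CNN) and motion (SIFT-flow)
    features into one spatio-temporal descriptor per frame."""
    assert cnn_feats.shape[0] == sift_flow_feats.shape[0], "frame counts must match"
    return np.concatenate([cnn_feats, sift_flow_feats], axis=1)

# Toy example: 30 frames, 2048-dim CNN features, 512-dim SIFT-flow features
cnn = np.random.rand(30, 2048)
sift = np.random.rand(30, 512)
fused = fuse_features(cnn, sift)
print(fused.shape)  # (30, 2560)
```

The fused sequence would then be fed frame by frame into the encoder stage of the S2VT model.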

[0058] In this embodiment, BLEU and METEOR are used to evaluate the performance of the video description method. The data set used in the demonstration experiment is MSVD (Microsoft Research Video Description), also known as Youtube2Text. MSVD is c...
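As context for the BLEU metric named above, the core of BLEU-1 is a clipped (modified) unigram precision. This is a minimal sketch of that single component, omitting the brevity penalty and higher-order n-grams of full BLEU; the example sentences are hypothetical:

```python
from collections import Counter

def bleu1(candidate, reference):
    """Modified unigram precision (the BLEU-1 component, no brevity
    penalty): clipped candidate word counts divided by candidate length."""
    cand, ref = candidate.split(), reference.split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    # Clip each candidate word's count by its count in the reference
    clipped = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
    return clipped / len(cand)

# "a", "man", "guitar" match (with "a" clipped to 1): 3 of 6 words
print(bleu1("a man is playing a guitar", "a man plays the guitar"))  # 0.5
```

Full BLEU as used in video description benchmarks combines precisions up to 4-grams with a brevity penalty; METEOR additionally accounts for stems and synonyms.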



Abstract

The invention discloses a video description method based on multi-feature fusion, characterized in that: 1) deep spatio-temporal features of the video are extracted by fusing traditional CNN features with SIFT-flow features; 2) based on the features extracted in step 1), an S2VT sentence generation model augmented with the average-pooled feature as the overall video feature generates the corresponding sentence description; 3) word2vec word vectors replace the one-hot word representation to optimize the sentence generation model of step 2). The advantage of this method is that multi-feature fusion extracts more robust spatio-temporal features; adding the average-pooled feature to the sentence generation model establishes more connections between visual information and words; and replacing the one-hot word representation with word2vec word vectors establishes more connections between words, effectively improving video description performance.
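The "average-pooled feature as the overall video feature" in step 2) can be sketched as a mean over the per-frame feature sequence. The 2560-dim feature size below is an illustrative assumption:

```python
import numpy as np

def global_feature(frame_feats):
    """Mean-pool per-frame features over time to obtain a single
    video-level descriptor, supplied to the sentence generator
    alongside the per-frame feature sequence."""
    return frame_feats.mean(axis=0)

frames = np.random.rand(30, 2560)  # 30 frames of fused features
g = global_feature(frames)
print(g.shape)  # (2560,)
```

This pooled vector summarizes the whole clip, giving the decoder a stable global context in addition to the frame-by-frame inputs.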

Description

Technical field

[0001] The invention relates to video description technology, in particular to a video description method based on multi-feature fusion.

Background technique

[0002] Basic concept: video description refers to describing the semantic information in a video with a natural-language sentence, based on the visual information of the given video.

[0003] Purpose and meaning: the purpose of video description is to learn the semantic information contained in a video and describe it in natural language. It has wide application value in many fields, such as semantic-content-based video retrieval and annotation, descriptive video services, navigation for the blind, and automatic video surveillance. In recent years, with the rapid development of technologies such as the Internet and multimedia, the amount of visual data has grown exponentially, and the technology of learning semantic information from visual information has grad...

Claims


Application Information

Patent Type & Authority: Patents (China)
IPC(8): G06F16/74, G06F40/134, G06K9/00, G06K9/62
CPC: G06F16/739, G06F40/14, G06V20/47, G06F18/253
Inventor 刘纯平徐鑫林欣刘海宾季怡
Owner SUZHOU UNIV