
Video description method based on multi-feature fusion

A multi-feature fusion video description technology, applicable to special data processing applications, instruments, electrical digital data processing, etc.

Active Publication Date: 2017-10-17
SUZHOU UNIV

AI Technical Summary

Problems solved by technology

[0011] The purpose of the present invention is to propose a multi-feature fusion video description method that addresses the problems of existing video description methods. The method extracts more robust spatio-temporal features, adds global features to establish more connections between visual information and words, and finally replaces one-hot vector word representation with the word2vec word vector method to establish more connections between words, thereby improving video description performance.



Examples


Embodiment

[0056] Embodiment: with reference to Figures 1 to 8, the video description method based on multi-feature fusion provided by the present invention is described as follows:

[0057] The overall flowchart and frame diagram of the inventive method are shown in Figure 1 and Figure 2, respectively. First, the deep spatio-temporal features of videos are extracted by fusing traditional CNN features and SIFT flow features. Then, based on the extracted features, the S2VT sentence generation model with integrated features is used to generate corresponding sentence descriptions. Finally, the word2vec word vector is used to replace the one-hot vector word representation to optimize the sentence generation model.
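The fusion step above can be sketched as follows. This is a minimal illustration, not the patent's exact procedure: the fusion operator (here simple concatenation of per-frame descriptors) and the feature dimensions are assumptions for demonstration, and the average pooling shown is the kind of global video feature the method adds to the sentence generation model.

```python
import numpy as np

def fuse_features(cnn_feats, sift_flow_feats):
    """Fuse per-frame CNN and SIFT-flow descriptors by concatenation.

    cnn_feats:       (T, Dc) array, one CNN descriptor per frame.
    sift_flow_feats: (T, Ds) array, one SIFT-flow descriptor per frame.
    Returns a (T, Dc + Ds) array of fused spatio-temporal features.
    (Concatenation is an assumed fusion operator for illustration.)
    """
    assert cnn_feats.shape[0] == sift_flow_feats.shape[0], "frame counts must match"
    return np.concatenate([cnn_feats, sift_flow_feats], axis=1)

def mean_pool(frame_feats):
    """Average-pool frame features into a single global video feature."""
    return frame_feats.mean(axis=0)

# Toy example: 5 frames, 4-d CNN features, 3-d SIFT-flow features.
T = 5
fused = fuse_features(np.ones((T, 4)), np.zeros((T, 3)))  # shape (5, 7)
global_feat = mean_pool(fused)                            # shape (7,)
```

The global pooled vector would then be fed into the S2VT model alongside the per-frame fused features.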

[0058] In this embodiment, BLEU and METEOR are used to evaluate the performance of the video description method, and the data set used in the demonstration experiment is MSVD (Microsoft Research Video Description), also known as Youtube2Text. MSVD is c...



Abstract

The invention discloses a video description method based on multi-feature fusion. The method is characterized in that (1) deep spatio-temporal features of a video are extracted by fusing traditional CNN features and SIFT flow features; (2) an S2VT sentence generation model, with average pooling features added as global video features, is adopted to generate the corresponding sentence description from the deep spatio-temporal features extracted in step (1); and (3) the word2vec word vector method is adopted to replace one-hot vector word representation to optimize the sentence generation model of step (2). The method has the advantages that more robust spatio-temporal features can be extracted through multi-feature fusion; meanwhile, the average pooling features added to the sentence generation model establish more relations between visual information and words; finally, replacing one-hot vector word representation with the word2vec word vector method establishes more relations between words, so that video description performance is effectively improved.
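The motivation for step (3) can be illustrated numerically: one-hot vectors make every pair of distinct words orthogonal, so no relation between words is expressed, while dense word vectors (such as those produced by word2vec) place related words closer together. The toy embeddings below are hypothetical stand-ins for trained word2vec vectors, chosen only to show the contrast.

```python
import numpy as np

def one_hot(idx, vocab_size):
    """One-hot representation: a single 1 at the word's vocabulary index."""
    v = np.zeros(vocab_size)
    v[idx] = 1.0
    return v

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

vocab = {"run": 0, "sprint": 1, "table": 2}

# One-hot: any two distinct words have similarity exactly 0.
sim_onehot = cosine(one_hot(vocab["run"], 3), one_hot(vocab["sprint"], 3))

# Toy 2-d dense embeddings (hypothetical, standing in for word2vec output):
# semantically related words end up with high cosine similarity.
emb = {
    "run":    np.array([0.9, 0.1]),
    "sprint": np.array([0.8, 0.2]),
    "table":  np.array([-0.1, 0.95]),
}
sim_related   = cosine(emb["run"], emb["sprint"])  # high
sim_unrelated = cosine(emb["run"], emb["table"])   # near zero
```

In the dense space the decoder can generalize across related words, which is exactly the extra word-to-word structure the one-hot representation lacks.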

Description

Technical field

[0001] The invention relates to video description technology, in particular to a video description method based on multi-feature fusion.

Background technique

[0002] Basic concept: Video description refers to describing the semantic information in a video with a natural-language sentence, based on the visual information of the given video.

[0003] Purpose and meaning: The purpose of video description is to learn the semantic information contained in a video from the video information and describe it in natural language. It has wide application value in many fields, such as video retrieval and video annotation based on semantic content, descriptive video service, blind navigation, and automatic video surveillance. In recent years, with the rapid development of technologies such as the Internet and multimedia, the amount of visual data has grown exponentially, and the technology of learning semantic information from visual information has grad...

Claims


Application Information

Patent Timeline
no application
IPC(8): G06F17/30, G06F17/22, G06K9/00, G06K9/62
CPC: G06F16/739, G06F40/14, G06V20/47, G06F18/253
Inventor: 刘纯平, 徐鑫, 林欣, 刘海宾, 季怡
Owner SUZHOU UNIV