Video content description method based on semantic information guidance

A technique for video content description guided by semantic information, applied in special data processing applications, instruments, electrical digital data processing, etc. It addresses the cumbersome methods and temporal confusion of existing approaches, improving description accuracy and preserving temporal and spatial correlation.

Active Publication Date: 2017-08-11
HANGZHOU DIANZI UNIV
Cites: 12 · Cited by: 77

AI Technical Summary

Problems solved by technology

[0010] In order to overcome the cumbersome methods in the existing field of video content description and the temporal confusion caused by fusing multiple features, and to further improve description accuracy, the present invention proposes, on the basis of the above two methods, a new video content description method based on semantic information guidance.



Examples


Embodiment

[0070] With reference to Figure 2, a specific example of training and testing the video content description method is given; the detailed calculation process is as follows:

[0071] (1) A video of 280 frames is divided into 28 blocks, and the first frame of each block is taken, so the video is converted into 28 consecutive pictures;
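The sampling in step (1) can be sketched as follows. This is a minimal illustration assuming uniform block sampling; the exact rule is inferred from the 280-frame / 28-block example in the text.

```python
# Sketch of step (1): split the video into equal blocks and keep the
# index of the first frame of each block (uniform sampling assumed).
def sample_first_frames(total_frames: int, n_blocks: int = 28) -> list[int]:
    """Return the frame index of the first frame of each block."""
    block_len = total_frames // n_blocks
    return [b * block_len for b in range(n_blocks)]

frames = sample_first_frames(280)   # 280-frame video -> 28 frame indices
print(frames[:3], frames[-1])       # [0, 10, 20] 270
```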

[0072] (2) According to the method given in formula (1), a pre-trained convolutional neural network is used to extract the static features of the 28 pictures and the dynamic features of the entire video, and the two are fused by concatenation (cascading);

[0073] (3) The pre-trained faster-rcnn performs fast object detection on the 28 pictures, forming 28 semantic information vectors of 81 dimensions each;

[0074] (4) The semantic information vector of each frame is concatenated with the original feature vector extracted by the CNN+3-D CNN to form a 1457-dimensional semantic feature vector. According to the methods listed ...
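Steps (2)-(4) amount to a per-frame feature concatenation. A minimal sketch with placeholder arrays is shown below; the 1376-dimensional size of the original CNN+3-D CNN feature vector is inferred from 1457 - 81, and the real networks are replaced with zero arrays.

```python
import numpy as np

# Dimensions taken from the embodiment: 28 sampled frames, 81-dim
# faster-rcnn semantic vectors, 1457-dim fused semantic feature vectors.
N_FRAMES, SEM_DIM, FUSED_DIM = 28, 81, 1457
CNN_DIM = FUSED_DIM - SEM_DIM   # 1376, inferred

cnn_feats = np.zeros((N_FRAMES, CNN_DIM))   # stand-in for CNN + 3-D CNN output
sem_feats = np.zeros((N_FRAMES, SEM_DIM))   # stand-in for faster-rcnn vectors

# Concatenation ("cascading") along the feature axis yields the per-frame
# semantic feature vectors that are fed to the decoder.
fused = np.concatenate([sem_feats, cnn_feats], axis=-1)
print(fused.shape)   # (28, 1457)
```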



Abstract

The invention discloses a video content description method based on semantic information guidance. The method comprises the steps of: (1) preprocessing the video format; (2) establishing the semantic information used for guidance; (3) calculating the weight of each semantic feature vector [A, X_MS^(i)]; (4) decoding the semantic feature vectors [A, X_MS^(i)]; and (5) testing the video description model. With a faster-rcnn model, the key semantic information in each frame image can be quickly detected and added to the original features extracted by the CNN, so that the feature vector input into the LSTM network at each time step carries semantic information; thus, during decoding, the spatio-temporal correlation of the video content is preserved and the accuracy of the language description is improved.
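The weighting in step (3) is not detailed in this summary; a minimal sketch, assuming a standard softmax attention over the per-frame semantic feature vectors (the patent's actual scoring network is an assumption here), might look like:

```python
import numpy as np

# Hypothetical softmax attention over per-frame semantic feature vectors;
# the scoring function is a placeholder, not the patent's actual network.
def attend(features: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """Return the attention-weighted context vector fed to the LSTM decoder."""
    w = np.exp(scores - scores.max())
    w /= w.sum()                 # attention weights sum to 1
    return w @ features          # weighted sum over frames -> (dim,) vector

rng = np.random.default_rng(0)
feats = rng.random((28, 1457))   # 28 frames, 1457-dim fused vectors
ctx = attend(feats, rng.random(28))
print(ctx.shape)                 # (1457,)
```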

Description

Technical field

[0001] The invention belongs to the technical fields of computer vision and natural language processing, and relates to a video content description method guided by semantic information.

Background technique

[0002] 1. Video content description

[0003] Previous research on video content description falls mainly into two directions:

[0004] 1. Methods based on feature recognition and language template filling. Specifically, the method has two steps. First, the video is converted into a collection of consecutive frame images at a certain time interval; second, a series of feature classifiers pre-trained on a large-scale image training set are used to classify and label the static and dynamic features in the video. These features can be subdivided into entities, entity attributes, interaction relationships between entities, scenes, etc.; finally, according to the characteristics of ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F17/30, G06K9/46, G06K9/00
CPC: G06F16/7847, G06V20/41, G06V10/424
Inventors: 涂云斌, 颜成钢, 冯欣乐, 李兵, 楼杰栋, 彭冬亮, 张勇东, 王建中
Owner: HANGZHOU DIANZI UNIV