Video attribute representation learning method and method for automatically generating video text description

A text-description and representation-learning technology, applied in the field of computer vision, aimed at efficient extraction of video attribute features

Active Publication Date: 2017-10-10
ANHUI UNIVERSITY +1

AI Technical Summary

Problems solved by technology

[0005] The second technical problem to be solved by the present invention is to provide a method for automatically generating video text descriptions that addresses how to integrate the ext...



Examples


Embodiment Construction

[0032] The embodiments of the present invention are described in detail below. These embodiments are implemented on the premise of the technical solution of the present invention, with detailed implementation methods and specific operating procedures provided, but the protection scope of the present invention is not limited to the following embodiments.

[0033] A video attribute representation learning method for extracting video semantic information that can be used for automatic generation of video text descriptions, comprising the following steps:

[0034] Step 1) Collect a batch of data for training and testing the automatic video text description algorithm; the data requires that each video correspond to several text descriptions;

[0035] Step 2) The present invention defines all nouns, verbs, and adjectives that appear in the text descriptions of the training set as the attribute labeling information of the corresponding...
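Step 2 above can be sketched in a few lines: keep only the nouns, verbs, and adjectives of each caption, and take the union over a video's captions as its attribute labels. This is a toy illustration, not the patent's implementation — a real system would use a part-of-speech tagger, whereas the tiny `POS_TAGS` lookup and the example captions here are hand-made assumptions.

```python
# Toy sketch of Step 2: build a video's attribute labels from its
# training captions by keeping nouns, verbs, and adjectives.
# POS_TAGS is a hand-made stand-in for a real part-of-speech tagger.
POS_TAGS = {
    "man": "noun", "guitar": "noun", "dog": "noun",
    "plays": "verb", "runs": "verb",
    "red": "adj", "fast": "adj",
    "a": "det", "the": "det", "on": "prep",
}
KEEP = {"noun", "verb", "adj"}

def attributes_of(caption: str) -> set:
    """Return the attribute words (nouns/verbs/adjectives) of one caption."""
    return {w for w in caption.lower().split()
            if POS_TAGS.get(w) in KEEP}

# Each video has several captions; its attribute label set is the union.
captions = ["A man plays a guitar", "The fast dog runs"]
video_attrs = set().union(*(attributes_of(c) for c in captions))
print(sorted(video_attrs))
# → ['dog', 'fast', 'guitar', 'man', 'plays', 'runs']
```

Collecting these sets over the whole training corpus yields the attribute vocabulary, with each video labeled by the multiple attribute tags the abstract describes.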



Abstract

The invention discloses a video attribute representation learning method, comprising the following steps: collecting a batch of data for training and testing an automatic video text description algorithm, where each video must correspond to several sentences of text description; defining all nouns, verbs, and adjectives appearing in the text descriptions of the training set as the attribute labeling information of the corresponding videos, so that each video in the training set corresponds to multiple attribute tags; and representing a video sequence as a single image, thereby transforming a complex and difficult video-sequence multi-classification problem into a simple single-image multi-label classification problem. The invention further discloses a method for automatically generating video text descriptions on the basis of the video attribute representation learning method. The method provides efficient attribute feature representation extracted from videos; by adopting the fusion method disclosed by the invention, a complete method for automatically generating text descriptions that reflect the semantic information of video attributes can be obtained.
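The video-as-single-image idea in the abstract can be sketched as follows. The patent does not specify how frames are aggregated, so mean pooling over time is an assumption for illustration, as are the toy shapes and the 4-word attribute vocabulary; `video_to_single_image` and `multi_hot_labels` are hypothetical names.

```python
import numpy as np

def video_to_single_image(frames: np.ndarray) -> np.ndarray:
    """Collapse a video (T, H, W, C) into one image (H, W, C).

    Mean pooling over time is an illustrative assumption; the patent
    only states that a video sequence is represented as a single image.
    """
    return frames.mean(axis=0)

def multi_hot_labels(video_attrs, vocab):
    """Turn a video's attribute words into a multi-hot label vector,
    the target of the single-image multi-label classification problem."""
    index = {w: i for i, w in enumerate(vocab)}
    y = np.zeros(len(vocab), dtype=np.float32)
    for w in video_attrs:
        if w in index:
            y[index[w]] = 1.0
    return y

# Toy video: 8 frames of 4x4 RGB noise.
rng = np.random.default_rng(0)
frames = rng.random((8, 4, 4, 3))
img = video_to_single_image(frames)
print(img.shape)  # → (4, 4, 3)

vocab = ["man", "play", "guitar", "red"]
y = multi_hot_labels({"man", "guitar"}, vocab)
print(y.tolist())  # → [1.0, 0.0, 1.0, 0.0]
```

Pairing each pooled image with its multi-hot attribute vector is what reduces the video multi-classification problem to single-image multi-label classification.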

Description

technical field

[0001] The invention relates to the field of computer vision, and more specifically to a method for automatically generating video text descriptions.

Background technique

[0002] The automatic generation of video text description refers to the automatic generation of a text description related to the video content through an algorithm, given a video sequence. Due to the complexity of video content, traditional algorithms based on search models or language models have always been less effective. In recent years, with the development of deep learning technology, algorithms based on a convolutional neural network (CNN) plus a recurrent neural network (RNN) have achieved exciting results. The basic steps of this series of algorithms are as follows: (1) Extract the feature vector of the video through a CNN (two-dimensional or three-dimensional convolution), and then encode the video feature vector into the feature vector required by the language model thro...
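A minimal sketch of the CNN-plus-RNN pipeline described in [0002]: encode the video into a feature vector, then decode a caption word by word with a recurrent cell. The CNN is replaced by a stub, the RNN is a single hand-rolled tanh cell, and all sizes, weights, and the 4-word vocabulary are arbitrary assumptions for illustration, not the patent's actual model.

```python
import numpy as np

# Toy encoder-decoder captioning pipeline: stub CNN + one-cell RNN.
rng = np.random.default_rng(42)
VOCAB = ["<eos>", "man", "plays", "guitar"]  # hypothetical tiny vocabulary
D_FEAT, D_HID = 8, 6                         # arbitrary sizes

def cnn_features(video) -> np.ndarray:
    """Stand-in for a CNN encoder: here, just a fixed random vector."""
    return rng.standard_normal(D_FEAT)

# Untrained random weights; a real system would learn these.
W_in = rng.standard_normal((D_HID, D_FEAT)) * 0.1
W_rec = rng.standard_normal((D_HID, D_HID)) * 0.1
W_out = rng.standard_normal((len(VOCAB), D_HID)) * 0.1

def decode(feat: np.ndarray, max_len: int = 5):
    """Greedy RNN decoding: feed the video feature at every step and
    emit the highest-scoring word until <eos> or max_len."""
    h = np.zeros(D_HID)
    words = []
    for _ in range(max_len):
        h = np.tanh(W_in @ feat + W_rec @ h)
        word = VOCAB[int(np.argmax(W_out @ h))]
        if word == "<eos>":
            break
        words.append(word)
    return words

caption = decode(cnn_features(None))
print(caption)  # some list of at most 5 words from VOCAB
```

With random weights the emitted words are meaningless; the sketch only shows the encode-then-decode structure the background section outlines.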

Claims


Application Information

IPC(8): G06K9/62; G06K9/00
CPC: G06V20/41; G06V20/46; G06V20/40; G06F18/243; G06F18/253; G06F18/214
Inventor 李腾年福东李飞凤
Owner ANHUI UNIVERSITY