Video description generation method based on deep learning and probabilistic graphical model

A video description technology based on a probabilistic graphical model, applied in character and pattern recognition, special data-processing applications, instruments, and related fields. It addresses the problem that existing methods use insufficient additional information, and achieves more accurate video description.

Active Publication Date: 2017-06-13
TSINGHUA UNIV
2 Cites · 29 Cited by

AI Technical Summary

Problems solved by technology

This method is trained only on the video data set, so the additional information it exploits is insufficient and the approach has certain limitations.

Method used


Examples


Embodiment Construction

[0037] In order to make the purpose, technical solution, and advantages of the present invention clearer, embodiments of the present invention are described in detail below with reference to the drawings and examples.

[0038] The invention recognizes the actions and objects in a video with a fast region-based convolutional neural network and an action-recognition convolutional neural network, obtaining an initial understanding of the information the video contains; it then uses a conditional random field to find the subject-verb-object triple with the highest probability, removing noise objects and actions that would distort the result; finally, a long short-term memory network converts the subject-verb-object triple into a description.
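The triple-selection step in [0038] can be illustrated with a minimal sketch: unary detector confidences plus pairwise compatibility terms, maximized over all candidate triples. The score tables and compatibility values below are invented for illustration; the patent does not specify the CRF's actual potentials or inference method.

```python
import itertools

def best_triple(obj_scores, act_scores, compat):
    """Pick the <subject, action, object> triple with the highest combined
    score: unary detector confidences plus pairwise subject-action and
    action-object compatibility terms, as a CRF's MAP inference would."""
    best, best_score = None, float("-inf")
    for s, a, o in itertools.product(obj_scores, act_scores, obj_scores):
        score = (obj_scores[s] + act_scores[a] + obj_scores[o]
                 + compat.get((s, a), 0.0) + compat.get((a, o), 0.0))
        if score > best_score:
            best, best_score = (s, a, o), score
    return best, best_score

# Hypothetical detector outputs (log-confidences) for one video.
objects = {"person": 2.0, "dog": 1.5, "car": 0.3}   # "car" is a noise object
actions = {"walk": 1.8, "drive": 0.9}
# Pairwise terms reward plausible pairs and starve implausible ones.
compat = {("person", "walk"): 1.0, ("walk", "dog"): 1.0,
          ("person", "drive"): 0.2, ("drive", "car"): 0.5}

triple, score = best_triple(objects, actions, compat)
print(triple)  # ('person', 'walk', 'dog')
```

Even though "car" was detected, its low confidence and weak compatibility keep it out of the winning triple, which is exactly the noise-removal effect the paragraph describes.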

[0039] A video description generation method based on deep learning and a probabilistic graphical model (see Figure 1) includes the following steps:

[0040] 101: Utilize existing image datasets to train a ...
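Before the trained models are applied, the abstract mentions a frame-extraction step. A small sketch of one common way to do this is to sample evenly spaced frame indices; the function name and sampling scheme here are assumptions for illustration, not taken from the patent.

```python
def sample_frame_indices(total_frames, num_samples):
    """Evenly spaced frame indices for frame extraction, so the object and
    action detectors see the whole video rather than just its start."""
    if num_samples >= total_frames:
        return list(range(total_frames))
    step = total_frames / num_samples
    # Take the midpoint of each of the num_samples equal segments.
    return [int(i * step + step / 2) for i in range(num_samples)]

print(sample_frame_indices(300, 5))  # [30, 90, 150, 210, 270]
```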



Abstract

The invention discloses a video description generation method based on deep learning and a probabilistic graphical model. The method includes the following steps: an existing image data set is used to train a fast region object-recognition convolutional neural network model; an existing video data set is used to train an action-recognition convolutional neural network model; frames are extracted from a video, and the two models are used to recognize the objects and actions in it, largely determining the video's main content; a conditional random field is used to find the subject-verb-object triple <object, action, object> with the maximum probability, so that noise objects in the video are excluded and the final description is more accurate; the triple is then input into a long short-term memory network, which outputs appropriate sentences, namely the description of the input video. Converting video into a description in this way lets people understand the content of a video more quickly and further increases the speed of video retrieval.
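The final step above feeds the triple into an LSTM that emits a sentence. A minimal NumPy sketch of that decoding loop is shown below; the vocabulary, dimensions, and random (untrained) weights are all invented for illustration, and the standard LSTM gate equations stand in for whatever architecture the patent actually trains.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One standard LSTM step: input/forget/output gates and candidate."""
    z = W @ x + U @ h + b                       # (4H,) pre-activations
    i, f, o, g = np.split(z, 4)
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    c_new = sig(f) * c + sig(i) * np.tanh(g)
    h_new = sig(o) * np.tanh(c_new)
    return h_new, c_new

# Toy setup: condition on a <person, walks, dog> triple, then decode words.
vocab = ["<eos>", "a", "person", "walks", "the", "dog"]
D, H = 8, 16                                    # embedding / hidden sizes
E = rng.normal(size=(len(vocab), D))            # word embeddings (random)
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
W_out = rng.normal(size=(len(vocab), H))        # hidden -> vocab logits

h, c = np.zeros(H), np.zeros(H)
for w in ("person", "walks", "dog"):            # feed the triple in
    h, c = lstm_step(E[vocab.index(w)], h, c, W, U, b)

word, sentence = "a", []
for _ in range(6):                              # cap the sentence length
    h, c = lstm_step(E[vocab.index(word)], h, c, W, U, b)
    word = vocab[int(np.argmax(W_out @ h))]     # greedy decoding
    if word == "<eos>":
        break
    sentence.append(word)
print(sentence)  # untrained weights, so the emitted words are arbitrary
```

With trained weights the greedy loop would emit a fluent sentence such as "a person walks the dog"; here it only demonstrates the conditioning-then-decoding control flow.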

Description

Technical Field

[0001] The invention belongs to the technical field of video description generation, and in particular relates to a video description generation method based on deep learning and a probabilistic graphical model.

Background Technique

[0002] With the rapid development of the Internet, multimedia data such as text, voice, image, and video have entered an era of explosive growth. The popularity of smart devices equipped with rich sensors has promoted user-generated content, and the threshold for producing data has become lower and lower, leading to an exponential increase in the amount of data stored by Internet companies. Massive data provide the basic conditions for scientific research and applications, and emerging crowdsourcing methods supply data sets for model training, bringing data analysis into a new stage.

[0003] In an age of such huge data volumes, the speed at which the information in streaming media can be absorbed becomes very important. Compared wit...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62, G06F17/30
CPC: G06F16/738, G06V20/46, G06F18/214
Inventors: 覃征, 黄凯, 王国龙, 徐凯平, 叶树雄
Owner TSINGHUA UNIV