
Video understanding method based on deep learning

A deep learning and video technology, applied in the field of video understanding, achieving the effect of improved accuracy

Inactive Publication Date: 2018-04-13
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

There have been many studies on using convolutional neural networks to process two-dimensional image data, but methods that use deep networks to process video data are still being refined.




Embodiment Construction

[0028] The video understanding method based on deep learning of the present invention is described in detail below with reference to the embodiments and the accompanying drawings.

[0029] As shown in Figure 1, the deep-learning-based video understanding method of the present invention comprises the following steps:

[0030] 1) Obtain a model based on the LSTM network through training, including:

[0031] (1) Use the C3D algorithm to obtain image features: for each input video image sequence x = {x_1, x_2, ..., x_t, ..., x_n}, where x_1, x_2, ..., x_t, ..., x_n correspond respectively to the 1st, 2nd, ..., tth, ..., nth frame of the sequence x, divide all frames of x into groups of 8 frames; for every group of 8 frames, output the data of the C3D fc7 layer as the feature extraction result, obtaining k 4096-dimensional feature vectors, where k is n÷8 rounded down.
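The frame-grouping step above can be sketched as follows. Here `extract_fc7` is a hypothetical placeholder for a forward pass through a C3D network up to its fc7 layer; the network itself is not part of this sketch.

```python
import numpy as np

def c3d_fc7_features(frames, extract_fc7, clip_len=8, feat_dim=4096):
    """Split a frame sequence into 8-frame clips and extract one
    fc7 feature vector per clip, as described in step (1).

    Trailing frames that do not fill a complete clip are dropped,
    so the number of clips is k = floor(n / 8)."""
    n = len(frames)
    k = n // clip_len                      # number of complete 8-frame clips
    feats = np.empty((k, feat_dim), dtype=np.float32)
    for i in range(k):
        clip = frames[i * clip_len:(i + 1) * clip_len]
        feats[i] = extract_fc7(clip)       # one 4096-d vector per clip
    return feats

# Usage with a stand-in extractor (a real C3D forward pass would go here):
dummy_extract = lambda clip: np.zeros(4096, dtype=np.float32)
video = [np.zeros((112, 112, 3)) for _ in range(20)]   # 20 frames
features = c3d_fc7_features(video, dummy_extract)
print(features.shape)   # (2, 4096): k = 20 // 8 = 2
```

With 20 input frames, the last 4 frames are discarded and two fc7 vectors are produced, matching k = ⌊n/8⌋.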

[0032](...



Abstract

The invention provides a video understanding method based on deep learning. The method comprises the following steps. 1) A model based on an LSTM network is acquired through training: a C3D algorithm is used to acquire image features; a PCA algorithm is used to reduce the dimension of each feature vector from 4096 to 128; time-domain aliasing and normalization are carried out to acquire normalized feature vectors; and the MSR-VTT database is used to train the LSTM network and obtain the LSTM network model. 2) Through the LSTM-network-based model, the statement information of a video image sequence to be detected is acquired: the C3D algorithm is used to acquire the feature vectors of the sequence to be detected; the PCA algorithm is used for dimension reduction, and time-domain aliasing and normalization are carried out to acquire normalized feature vectors; and through the LSTM-network-based model, a sentence describing the video image sequence to be detected is output. According to the invention, the accuracy of the existing model can be improved, and the original model can be further optimized with new data.
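As a rough sketch of the dimension-reduction and normalization steps in the abstract, the following reduces row feature vectors with PCA (computed via SVD) and then L2-normalizes the result. The patent reduces 4096 dimensions to 128; the example below uses smaller sizes only to keep it light, and L2 normalization is an assumption since the abstract does not name the norm used.

```python
import numpy as np

def pca_reduce_normalize(feats, out_dim):
    """PCA-reduce the row vectors of `feats` to `out_dim` dimensions,
    then L2-normalize each reduced vector."""
    centered = feats - feats.mean(axis=0, keepdims=True)
    # Right singular vectors of the centered data are the principal axes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    reduced = centered @ vt[:out_dim].T
    norms = np.linalg.norm(reduced, axis=1, keepdims=True)
    return reduced / np.maximum(norms, 1e-12)   # guard against zero rows

# In the patent, feats would be the k x 4096 C3D features and out_dim = 128.
rng = np.random.default_rng(0)
feats = rng.standard_normal((50, 256))
reduced = pca_reduce_normalize(feats, out_dim=16)
print(reduced.shape)   # (50, 16)
```

Each output row has unit L2 norm, so the LSTM sees inputs on a common scale regardless of clip content.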

Description

Technical field

[0001] The invention relates to a video understanding method, and in particular to a deep-learning-based method for video understanding.

Background technique

[0002] With the rapid development of the Internet, humanity has gradually entered the era of big data. There is a large amount of picture and video data on the Internet, coming from many different sources, and most of it has no accompanying text description, which makes processing these data at scale considerably difficult. It is easy for a human to write a corresponding descriptive text based on the content of a picture or video, but it is quite difficult for a computer to perform such a task. The topic of image/video captioning has thus come into view. It is a comprehensive problem combining computer vision, natural language processing and machine learning, similar to translating a picture/vi...

Claims


Application Information

IPC(8): G06K9/00, G06K9/62
CPC: G06V20/46, G06V20/41, G06F18/217
Inventor: 苏育挺, 刘瑶瑶, 刘安安
Owner: TIANJIN UNIV