Video semantic analysis method

A semantic analysis and video technology, applied to instruments, biological neural network models, and character and pattern recognition, addressing the slow convergence of convolutional neural networks and the inability to obtain labels for massive video data.

Active Publication Date: 2016-06-22
JIANGSU KING INTELLIGENT SYST +1


Problems solved by technology

However, the convolutional neural network is a supervised learning model: training it requires both a training data set and the labels corresponding to that data set, and convergence requires continuous iteration over a large number of samples. For tasks such as classification and detection on massive video data, it is impossible to obtain the corresponding label for each video.
[0003] To apply the convolutional neural network model, which has this supervised-training characteristic, to video data, earlier work proposed an unsupervised pre-training method, which solved the slow convergence of the traditional convolutional neural network. Compared with image data, however, video content exhibits rotation, scaling, translation, and similar transformations of the same target, so the feature extractor must be able to capture more complex and invariant features. How to extract features with strong invariance is therefore the problem to be solved.




Embodiment Construction

[0060] The technical solutions of the present invention will be described in further detail below in conjunction with the accompanying drawings and specific embodiments.

[0061] Referring to figure 1 and figure 2, according to a preferred embodiment of the present invention, the video semantic analysis method based on pre-training a convolutional neural network with a topology model includes the following steps. S1: preprocess the video training set and construct a sparse linear decoder. S2: add topological-property constraints to establish a topological linear decoder, and divide the video training set into image blocks to train it. S3: use the parameters of the trained topological linear decoder as the initial parameters of the convolutional layer in the convolutional neural network. S4: fine-tune the convolutional neural network using multi-fold cross-validation and a key frame set built from the video training set, thereby obtaining a general feature extractor for video data; the features extracted from the training set and the test set are then input to a support vector machine for video semantic classification. A sketch of steps S1-S3 is given below.
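As a minimal sketch of steps S1-S3, assuming PyTorch: the patent does not disclose its exact objective, so a group-sparsity pooling over a grid of decoder units stands in here for the topological constraint, and every size, penalty weight, and variable name is an assumption for illustration, not the patent's implementation.

```python
# Illustrative only: a linear decoder trained on image patches with a
# topographic (group-sparsity over grid neighbourhoods) penalty, whose
# learned weights seed a convolutional layer. All hyperparameters assumed.
import torch
import torch.nn as nn

PATCH = 8     # patch side length (assumed)
HIDDEN = 64   # decoder units, laid out on an 8x8 grid (assumed)

class TopoLinearDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Linear(PATCH * PATCH, HIDDEN)
        self.decode = nn.Linear(HIDDEN, PATCH * PATCH)

    def forward(self, x):
        h = self.encode(x)            # linear activations of the hidden units
        return h, self.decode(h)      # activations and reconstruction

def topographic_penalty(h, grid=8):
    # Group sparsity over 3x3 neighbourhoods on the unit grid: nearby units
    # are encouraged to activate together, one common way to impose a
    # topological constraint on learned features.
    hmap = (h * h).view(-1, 1, grid, grid)
    pooled = nn.functional.avg_pool2d(hmap, kernel_size=3, stride=1, padding=1)
    return torch.sqrt(pooled + 1e-8).sum(dim=(1, 2, 3)).mean()

def train(decoder, patches, epochs=50, lam=0.1):
    opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
    for _ in range(epochs):
        h, recon = decoder(patches)
        loss = nn.functional.mse_loss(recon, patches) + lam * topographic_penalty(h)
        opt.zero_grad()
        loss.backward()
        opt.step()

decoder = TopoLinearDecoder()
patches = torch.rand(10000, PATCH * PATCH)   # stand-in for video-frame patches
train(decoder, patches)

# S3: reshape the trained encoder weights into initial convolution filters.
conv = nn.Conv2d(1, HIDDEN, kernel_size=PATCH)
with torch.no_grad():
    conv.weight.copy_(decoder.encode.weight.view(HIDDEN, 1, PATCH, PATCH))
```

The point of the final copy is that each row of the trained encoder weight matrix becomes a convolution kernel, so the convolutional layer starts from features learned without labels rather than from random initialization.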



Abstract

The present invention provides a video semantic analysis method comprising the following steps: S1, preprocessing a video training set and constructing a sparse linear decoder; S2, adding topological-property constraints to build a topological linear decoder, and dividing the video training set into image blocks to train the topological linear decoder; S3, taking the parameters of the trained topological linear decoder as the initial parameters of a convolutional layer in a convolutional neural network; and S4, building a key frame set from the video training set to fine-tune the convolutional neural network using multi-fold cross-validation, thereby building a general feature extractor for video data, and inputting the features extracted from the training set and a test set into a support vector machine for video semantic classification. The training method provided by the invention can cope with video data samples of diverse content, improving the accuracy and robustness of the model.
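The last stage of S4, classifying the extracted features with a support vector machine, could look like the following sketch, assuming scikit-learn. The random arrays merely stand in for CNN key-frame features; all names, shapes, and the choice of a linear kernel are assumptions, not the patent's settings.

```python
# Illustrative only: fit an SVM on training-set features, classify test-set
# features. Random data stands in for features from the fine-tuned CNN.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
train_features = rng.normal(size=(200, 64))   # stand-in for CNN key-frame features
train_labels = rng.integers(0, 5, size=200)   # stand-in semantic-concept labels
test_features = rng.normal(size=(50, 64))

clf = LinearSVC(C=1.0)                        # C is an arbitrary choice here
clf.fit(train_features, train_labels)
predicted = clf.predict(test_features)        # predicted semantic concepts
```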

Description

Technical field
[0001] The invention relates to the technical field of video semantic detection, and in particular to a video semantic analysis method.
Background technique
[0002] To detect video semantic concepts, a convolutional neural network model is used to extract features from the key frame set of a video. Experiments show that, unlike hand-designed feature extraction methods, it extracts distributed features: the obtained features are data-driven and can adapt to a wider range of fields. However, the convolutional neural network is a supervised learning model: training it requires both a training data set and the labels corresponding to that data set, and convergence requires continuous iteration over a large number of samples. For tasks such as classification and detection on massive video data, it is impossible to obtain the corresponding label for each video.


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/72
CPC: G06N3/02, G06V20/40, G06V30/274
Inventors: 詹永照, 詹智财, 张建明, 彭长生
Owner: JIANGSU KING INTELLIGENT SYST