
Video analysis method based on transfer learning and video frame association learning

A transfer-learning and analysis technology applied in the field of automatic pixel-level analysis of video content, which solves problems such as the optical-flow "black hole" phenomenon and reduces the need for manual annotation.

Active Publication Date: 2019-08-20
XI'AN INST OF OPTICS & FINE MECHANICS - CHINESE ACAD OF SCI
Cites: 5 · Cited by: 3

AI Technical Summary

Problems solved by technology

In this process, however, the displacement vectors are usually computed with an optical flow algorithm, and the optical flow algorithm produces a "black hole" phenomenon, that is, regions whose pixels receive no label information, because the flow-based mapping between frames is neither injective nor surjective.
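The black-hole effect can be illustrated with a toy forward-warping example. The code below is a hypothetical illustration (not the patent's algorithm): pushing per-pixel labels along a displacement field is many-to-one and not onto, so some target pixels end up with no label at all.

```python
import numpy as np

H, W = 4, 4
labels = np.arange(H * W).reshape(H, W)          # one distinct label per pixel
flow = np.zeros((H, W, 2), dtype=int)            # per-pixel (dx, dy) displacement
flow[:, :2, 0] = 1                               # left two columns shift right by 1

warped = np.full((H, W), -1)                     # -1 marks "no label yet"
for y in range(H):
    for x in range(W):
        ty, tx = y + flow[y, x, 1], x + flow[y, x, 0]
        if 0 <= ty < H and 0 <= tx < W:
            warped[ty, tx] = labels[y, x]        # many-to-one overwrites can occur

holes = int((warped == -1).sum())                # pixels left without any label
print(holes)                                     # prints 4: column 0 received nothing
```

Here column 2 receives labels from two source columns (non-injective) while column 0 receives none (non-surjective), which is exactly the unlabeled-region phenomenon described above.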

Method used




Embodiment Construction

[0044] Referring to figure 2, the present invention is implemented by the following steps:

[0045] Step 1. Use motion estimation and optical flow field estimation for video frame association learning.

[0046] (1a) Calculate the forward mapping function, and use this as a basis to estimate the label of the next frame.

[0047]

[0048] where r_i^t denotes the i-th superpixel in frame t of the video, L(·) denotes the category information of a superpixel, and f(·) is the forward mapping function.
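The forward-mapping equation in [0047] is not reproduced in this excerpt. As a minimal sketch of step (1a), assuming labels are kept per superpixel and the forward mapping f sends the i-th superpixel of frame t to a superpixel index of frame t+1 (both `labels_t` and `f` below are hypothetical stand-ins):

```python
labels_t = {0: "sky", 1: "road", 2: "car"}   # L(r_i^t) for frame t

def f(i):
    # assumed forward mapping r_i^t -> r_j^{t+1}; returns None when the
    # superpixel maps outside the next frame
    return {0: 0, 1: 2, 2: 1}.get(i)

# propagate each superpixel's category along the forward mapping:
# the label estimate for frame t+1 is L(f(r_i^t)) = L(r_i^t)
labels_t1 = {}
for i, cls in labels_t.items():
    j = f(i)
    if j is not None:
        labels_t1[j] = cls

print(labels_t1)   # estimated labels for frame t+1
```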

[0049] (1b) Calculate the reverse mapping function, and use this as a basis to perform cross-validation on the labels of the previous frame.

[0050]
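The reverse-mapping equation in [0050] is likewise not reproduced here. A hedged sketch of step (1b), assuming a reverse mapping b (a hypothetical stand-in for the inverse flow mapping) that sends a superpixel of frame t+1 back to frame t: a propagated label is kept only when it is consistent in both directions.

```python
labels_t = {0: "sky", 1: "road"}      # known labels in frame t
labels_t1 = {0: "sky", 2: "road"}     # labels estimated by the forward pass

def b(j):
    # assumed reverse mapping r_j^{t+1} -> r_i^t
    return {0: 0, 2: 1}.get(j)

# cross-validate: keep an estimate only if mapping back to frame t
# recovers the same category
validated = {j: cls for j, cls in labels_t1.items()
             if b(j) is not None and labels_t.get(b(j)) == cls}
print(validated)
```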

[0051] (1c) Construct the energy function from the above two terms, as follows:

[0052]

[0053] The label information of the video is obtained from the above formula; it still contains regions whose pixels carry no label information, as shown in figure 1;
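The energy function of (1c) is not reproduced in this excerpt either. As an assumed illustration only, one common form combines the forward term and the reverse cross-validation term as penalties for label disagreement, so that a consistent bidirectional mapping has zero energy:

```python
def energy(label_t, label_t1_est, label_t1_back):
    # label_t:       category of the superpixel in frame t
    # label_t1_est:  category assigned in frame t+1 by the forward mapping
    # label_t1_back: category recovered by mapping frame t+1 back to frame t
    e_forward = 0 if label_t1_est == label_t else 1    # forward-mapping term
    e_backward = 0 if label_t1_back == label_t else 1  # reverse-mapping term
    return e_forward + e_backward

print(energy("road", "road", "road"))  # consistent in both directions -> 0
print(energy("road", "car", "road"))   # forward disagreement -> 1
```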

[0054] Step 2. For the "black hole" phenomenon generated in Step 1, perform cross-media transfer learning using an existing image annotation data set.
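A hedged sketch of this transfer step: pixels left unlabeled by the flow-based propagation are filled using a classifier trained on a labeled image data set. The nearest-centroid classifier below is an assumed stand-in for the patent's transfer model, and all feature values are illustrative.

```python
import numpy as np

# features and class ids learned from an existing labeled image data set
image_feats = np.array([[0.0, 0.0], [1.0, 1.0]])
image_labels = np.array([0, 1])

warped = np.array([0, -1, 1, -1])        # propagated labels; -1 = black-hole pixel
pixel_feats = np.array([[0.1, 0.0],      # appearance features of the video pixels
                        [0.9, 1.1],
                        [1.0, 0.9],
                        [0.0, 0.2]])

filled = warped.copy()
for p in np.where(warped == -1)[0]:
    # transfer the label of the nearest image-domain class centroid
    d = np.linalg.norm(image_feats - pixel_feats[p], axis=1)
    filled[p] = image_labels[np.argmin(d)]

print(filled.tolist())                   # prints [0, 1, 1, 0]: no holes remain
```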



Abstract

The invention discloses a video content analysis method based on transfer learning and video frame association learning, mainly solving two problems of existing video content analysis methods: the need for extensive manual annotation, and the "black hole" phenomenon that arises during video analysis. The method comprises the following implementation steps: (1) migrating tags across video frames according to motion prediction and an optical flow analysis algorithm; (2) applying cross-media transfer learning, using an existing image annotation data set, to the black-hole regions produced in the previous step; (3) modeling prior knowledge of the spatial distribution of objects within a single video frame using a Markov random field model; and (4) solving the three steps jointly under a maximum a posteriori probability model to obtain the final video analysis result. The invention makes full use of the spatio-temporal information in the video, and the transfer learning method transfers large-scale labeled image data into the video domain to fill in the black holes, yielding a more accurate pixel-level automatic labeling result for the video content.
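Step (3)'s spatial prior can be illustrated with a simple Potts-style Markov random field term, which penalizes neighboring pixels that carry different labels. This is an assumed illustration, not the patent's exact model; under the MAP formulation of step (4), such a smoothness term would be added to the data terms and the joint labeling with minimum total energy taken as the result.

```python
def potts_energy(labeling, beta=1.0):
    # labeling: 2D list of class ids; counts 4-connected neighbor disagreements
    h, w = len(labeling), len(labeling[0])
    e = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w and labeling[y][x] != labeling[y][x + 1]:
                e += beta        # horizontal neighbor disagreement
            if y + 1 < h and labeling[y][x] != labeling[y + 1][x]:
                e += beta        # vertical neighbor disagreement
    return e

smooth = [[0, 0], [0, 0]]
noisy = [[0, 1], [1, 0]]
print(potts_energy(smooth), potts_energy(noisy))   # prints 0.0 4.0
```

The prior thus favors spatially coherent labelings, which is what the Markov random field modeling of object spatial distribution is meant to encode.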

Description

Technical field

[0001] The invention belongs to the technical field of information processing, and in particular relates to a pixel-level automatic analysis method for video content, which can be applied in fields such as public security management, film and television creation, and multimedia technology.

Background technique

[0002] Vision is the most important means by which humans perceive information, and visual data accounts for more than 80% of all data humans receive. The semantic understanding of visual data (including image data and video data) has therefore become a research hotspot in the intelligent processing of computer data. Visual data semantic understanding also has a wide range of real-life applications, such as content-based image retrieval, 3D reconstruction, and automotive driver assistance systems.

[0003] In recent years, semantic understanding, as an important part of visual data processing, has received increasing research attention. ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/00
Inventor: 袁媛, 卢孝强, 牟立超
Owner XI'AN INST OF OPTICS & FINE MECHANICS - CHINESE ACAD OF SCI