Method for generating video abstract on basis of deep learning technology

A deep-learning-based video summarization technology, applied in the field of image processing, which addresses problems such as noise interference, fast-moving targets, and targets occupying small pixel areas, ensuring reliability while reducing processing time.

Active Publication Date: 2014-12-24
北京中科神探科技有限公司
Cites: 8 · Cited by: 52

AI Technical Summary

Problems solved by technology

[0004] For surveillance video, the shooting scene is often very complicated: in some scenes there are many vehicles moving at high speed, such as on a highway; in some scenes the moving target occupies only a small pixel area on the screen; in other scenes, uninteresting objects such as trees and flags are also set in motion by the wind; ...


Image

  • Method for generating video abstract on basis of deep learning technology

Examples


Embodiment Construction

[0031] In order to make the object, technical solution and advantages of the present invention clearer, the present invention will be further described in detail below in conjunction with specific embodiments and with reference to the accompanying drawings.

[0032] The present invention proposes a method for generating video summaries based on deep learning technology, the method comprising the following steps:

[0033] Firstly, background modeling is performed on the image sequence of the original video to obtain moving foreground blocks, which are then post-processed; secondly, the extracted moving regions are treated as candidate moving targets and tracked with a multi-target tracking technique based on the Hungarian algorithm, dividing the candidates into two types, those with formed trajectories and those without; thirdly, a convolutional neural network classifier is used to further confirm and classif...
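The patent does not disclose which background-modeling algorithm is used; as a minimal sketch, the first step can be illustrated with a simple running-average background model (one common choice; the function name, threshold, and update rate below are illustrative assumptions, not from the patent):

```python
import numpy as np

def extract_foreground_masks(frames, alpha=0.05, thresh=30):
    """Toy background subtraction via a running-average background model.

    frames: list of 2-D uint8 grayscale arrays.
    Returns one boolean foreground mask per frame.
    (Illustrative only; the patent does not name a specific method.)
    """
    background = frames[0].astype(np.float64)
    masks = []
    for frame in frames:
        diff = np.abs(frame.astype(np.float64) - background)
        mask = diff > thresh                     # pixels that moved
        # update the background only where no motion was detected
        background = np.where(
            mask, background, (1 - alpha) * background + alpha * frame
        )
        masks.append(mask)
    return masks
```

The foreground post-processing mentioned in [0033] (e.g. morphological filtering and grouping mask pixels into candidate blocks) would follow this step.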


PUM

No PUM

Abstract

The invention discloses a method for generating video abstract on the basis of a deep learning technology. The method includes modeling backgrounds of video stream frame by frame and acquiring moving foregrounds to be used as candidate moving objects; tracking the candidate moving objects of each frame by the aid of a multi-object tracking algorithm and updating candidate objects which form movement tracks; training object classifiers by the aid of convolutional neural networks, confirming the candidate objects and determining categories of the objects by the aid of the classifiers after real moving objects are confirmed; fitting all the real moving objects and relevant information on a small quantity of images, forming video snapshots and displaying the video snapshots to users. The method has the advantages that the real objects and noise can be accurately differentiated from one another by the aid of the deep learning technology; the objects do not need to be confirmed frame by frame owing to an accurate multi-object tracking technology, accordingly, the computational complexity can be greatly reduced, an omission factor of the objects and a false alarm rate of the noise can be effectively reduced, the video processing speeds can be increased, and the method can be applied to various complicated scenes.
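The multi-object tracking step described in the abstract can be sketched with the Hungarian (optimal assignment) algorithm, matching existing tracks to new detections. The patent does not specify the cost function; centroid distance with a rejection gate is assumed here, using `scipy.optimize.linear_sum_assignment` as a standard implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracks(track_centroids, detection_centroids, max_dist=50.0):
    """Assign detections to tracks via the Hungarian algorithm.

    Cost = Euclidean centroid distance; assigned pairs farther apart
    than max_dist are rejected. Returns (matches, unmatched_tracks,
    unmatched_detections), each referring to input indices.
    """
    tracks = np.asarray(track_centroids, dtype=float)
    dets = np.asarray(detection_centroids, dtype=float)
    if len(tracks) == 0 or len(dets) == 0:
        return [], list(range(len(tracks))), list(range(len(dets)))
    # pairwise distance matrix: tracks x detections
    cost = np.linalg.norm(tracks[:, None, :] - dets[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    unmatched_tracks = [i for i in range(len(tracks)) if i not in matched_t]
    unmatched_dets = [j for j in range(len(dets)) if j not in matched_d]
    return matches, unmatched_tracks, unmatched_dets
```

Tracks that keep matching across frames correspond to the "formed trajectory" candidates that the CNN classifier then confirms; unmatched detections spawn new candidate tracks.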

Description

technical field

[0001] The present invention relates to the technical field of image processing, and more specifically, to a video summary generation method based on deep learning technology.

Background technique

[0002] In modern society, video surveillance systems play an important role in all walks of life, helping to maintain social order and strengthen social management and security; however, with the rapid increase in the number of cameras, storing massive surveillance video data and understanding the events recorded in these videos consumes a lot of manpower and material resources. According to statistics from ReportLinker, in 2011 there were more than 165 million surveillance cameras in the world, generating 1.4 trillion hours of surveillance data. If 20% of the important surveillance video data needed to be watched manually, more than 100 million laborers would be required (8 hours a day, 300 days a year). Therefore, condensing ...
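The labor estimate in [0002] can be checked directly: 20% of 1.4 trillion hours, divided by one worker's annual viewing capacity of 8 hours/day over 300 days:

```python
total_hours = 1.4e12            # surveillance footage per year (ReportLinker, 2011)
important_hours = 0.20 * total_hours
hours_per_worker = 8 * 300      # 8 hours/day, 300 days/year
workers = important_hours / hours_per_worker
print(round(workers / 1e6))     # ≈ 117 (million workers)
```

This works out to roughly 117 million workers, consistent with the "more than 100 million laborers" figure in the background section.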

Claims


Application Information

IPC(8): H04N21/8549; G06T7/20
Inventor: 袁飞; 唐矗
Owner 北京中科神探科技有限公司