Online video summary generation method based on deep learning

A video summarization and deep learning technology, applied in the field of online video summary generation based on deep learning, which addresses the problem that traditional video summarization methods cannot meet the requirements of online processing of streaming video.

Active Publication Date: 2014-10-22
HANGZHOU HUICUI INTELLIGENT TECH CO LTD

Problems solved by technology

[0005] Large volumes of unstructured video are continuously generated in important fields such as intelligent transportation and security surveillance, and traditional video summarization methods cannot meet the application requirement of processing streaming video online.

Embodiment Construction

[0042] Referring to Figure 1, the present invention is further illustrated:

[0043] 1. After obtaining the original video data, perform the following operations (a code sketch of the resulting online loop is given after these steps):

[0044] 1) Divide the video evenly into a group of small frame blocks, each containing multiple frames; extract the statistical features of each frame image to form a corresponding vectorized representation;

[0045] 2) Pre-train a multi-layer deep network on the video frames to obtain a nonlinear representation of each frame;

[0046] 3) Select the first m frame blocks as the initial simplified video, and reconstruct it with a group sparse coding algorithm to obtain an initial dictionary and reconstruction coefficients;

[0047] 4) Update the deep network parameters according to the next frame block; at the same time, reconstruct that frame block and compute its reconstruction error. If the error is greater than the set threshold, add the frame block to the simplified video and update the dictionary;

[0048] 5) Process new frame blocks online in sequence according to step 4) until the end of the video; the updated simplified video is the generated video summary.
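Steps 3)-5) amount to an online, dictionary-based novelty test: a frame block enters the summary only when the current dictionary cannot reconstruct it well under a group sparsity penalty. Below is a minimal NumPy sketch of that loop, assuming the per-frame deep features from step 2) are already computed. The ISTA-style solver, the row-wise grouping of coefficients, and names such as `online_summary`, `lam`, and `tau` are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def group_sparse_codes(X, D, lam=0.1, n_iter=100):
    """Encode a frame block X (features x frames) against dictionary D
    (features x atoms) with an l2,1 group penalty, solved by proximal
    gradient descent (ISTA). Each atom's coefficient row is one group,
    so whole atoms switch on or off for the block as a unit."""
    A = np.zeros((D.shape[1], X.shape[1]))
    # step size = 1 / Lipschitz constant of the quadratic term's gradient
    step = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-8)
    for _ in range(n_iter):
        A = A - step * (D.T @ (D @ A - X))        # gradient step
        row_norms = np.linalg.norm(A, axis=1, keepdims=True)
        shrink = np.maximum(0.0, 1.0 - step * lam / (row_norms + 1e-12))
        A = A * shrink                            # group soft-thresholding
    return A

def reconstruction_error(X, D, A):
    """Mean squared reconstruction error of the block."""
    return np.linalg.norm(X - D @ A, "fro") ** 2 / X.size

def online_summary(blocks, m=5, tau=0.02, lam=0.1):
    """blocks: list of (features x frames) arrays of per-frame deep
    features. Returns indices of blocks kept in the simplified video."""
    def normalize(M):
        return M / (np.linalg.norm(M, axis=0, keepdims=True) + 1e-12)

    # step 3): the first m blocks form the initial simplified video;
    # their frames, stacked column-wise, serve as the initial dictionary
    D = normalize(np.hstack(blocks[:m]))
    kept = list(range(m))
    # steps 4)-5): process the remaining blocks online, one at a time
    for t in range(m, len(blocks)):
        X = blocks[t]
        A = group_sparse_codes(X, D, lam=lam)
        if reconstruction_error(X, D, A) > tau:
            # the dictionary explains this block poorly: keep it in the
            # summary and extend the dictionary with its normalized frames
            # (a simple stand-in for the patent's dictionary update)
            kept.append(t)
            D = np.hstack([D, normalize(X)])
    return kept
```

A toy run, again only to show the interface: with `rng = np.random.default_rng(0)` and `blocks = [rng.standard_normal((128, 10)) for _ in range(20)]`, `online_summary(blocks)` returns the indices of the blocks kept as the simplified video. In practice the threshold `tau` would be tuned on real feature statistics, and the patent's dictionary update may well differ from the simple append-and-normalize used here.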

Abstract

The invention relates to an online video summary generation method based on deep learning. The original video is subjected to the following operations: 1) the video is divided evenly into a group of small frame blocks, and the statistical features of each frame image are extracted to form corresponding vectorized representations; 2) a multi-layer deep network is pre-trained on the video frames to obtain a nonlinear representation of each frame; 3) the first m frame blocks are selected as an initial simplified video, which is reconstructed with a group sparse coding algorithm to obtain an initial dictionary and reconstruction coefficients; 4) the deep network parameters are updated according to the next frame block, which is reconstructed and its reconstruction error computed; if the error is larger than a set threshold, the frame block is added to the simplified video and the dictionary is updated; 5) new frame blocks are processed online in sequence according to step 4) until the end of the video, and the updated simplified video is the generated video summary. With this method, the latent high-level semantic information of the video can be mined in depth, the video summary can be generated quickly, users' time is saved, and the viewing experience is improved.
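For orientation, group sparse coding of a frame block is conventionally posed as the following optimization; this is a standard formulation assumed here for illustration, since the patent text does not spell out its exact objective:

$$\min_{D,\,A}\;\tfrac{1}{2}\,\lVert X - DA\rVert_F^2\;+\;\lambda\sum_{g}\lVert A_g\rVert_2,$$

where $X$ stacks the feature vectors of the frames in a block, $D$ is the dictionary, $A$ holds the reconstruction coefficients, and the groups $g$ partition the rows of $A$. The reconstruction error $\lVert X - DA\rVert_F^2$ of each new block is what step 4) compares against the threshold.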

Description

technical field

[0001] The invention belongs to the technical field of video summary generation, and in particular relates to an online video summary generation method based on deep learning.

Background technique

[0002] In recent years, with the increasing popularity of portable devices such as digital cameras, smart phones, and handheld computers, the number of videos of all types has grown explosively. For example, a medium-sized city has tens of thousands of video acquisition devices deployed in important fields such as intelligent transportation, security monitoring, and public security, and the video data these devices generate reaches the petabyte level. To lock onto a target person or vehicle, public security and traffic police personnel must spend a great deal of time watching tedious surveillance video streams, which greatly reduces work efficiency and is not conducive to building a safe city. Therefore, efficient selection...

Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N21/8549; G06F17/30
Inventors: 李平, 俞俊, 李黎, 徐向华
Owner: HANGZHOU HUICUI INTELLIGENT TECH CO LTD