
Video Concentration Method Based on Object Size Adaptive

A video condensation technology with object-size adaptation, applied in image data processing, instruments, computing, and related fields. It addresses the problem of moving objects colliding with and occluding one another, achieving the effect of reducing such mutual collision and occlusion.

Active Publication Date: 2017-08-08
NORTHWESTERN POLYTECHNICAL UNIV


Problems solved by technology

[0003] In order to overcome the shortcoming of existing video concentration methods, which are prone to collision and occlusion between moving objects, the present invention provides a video concentration method based on object-size self-adaptation.



Embodiment Construction

[0041] Refer to Figure 1. The specific steps of the object-size-adaptive video concentration method of the present invention are as follows:

[0042] 1. Synthesize the background image.

[0043] For each frame of the input video, a corresponding background image is synthesized. Specifically, an average is taken over the video sequence from 30 seconds before to 30 seconds after the current frame, and the averaged image is used as the background for that frame. For frames at the beginning of the original video that have no earlier frames, only the later part of the sequence is used; for frames at the end, only the earlier part is used. The background image synthesis formula is

[0044] B = (1/n) · Σ_{i=1..n} I_i

[0045] In the formula, B represents the background image to be synthesized, I_i represents a frame in the selected video sequence, and n represents the total number of frames in that sequence.
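The background-synthesis step above can be sketched in a few lines of Python. The function name, the flattened per-pixel list representation, and the default window parameters are illustrative assumptions; only the averaging rule and the boundary handling come from the text.

```python
def synthesize_background(frames, t, fps=30, window_s=30):
    """Average the frames within +/- window_s seconds of frame index t.

    `frames` is a list of equal-length flat pixel lists (one list per
    frame); this representation is a placeholder for real image data.
    """
    w = window_s * fps
    lo = max(0, t - w)                 # at the start of the video, only later frames exist
    hi = min(len(frames), t + w + 1)   # at the end, only earlier frames exist
    window = frames[lo:hi]
    n = len(window)
    # B = (1/n) * sum_i I_i : per-pixel average over the selected sequence
    return [sum(px) / n for px in zip(*window)]
```

With a 5-frame toy video and a 1-frame window (`fps=1, window_s=1`), the background for frame 2 averages frames 1 through 3, while frame 0 averages only frames 0 and 1, matching the boundary rule in paragraph [0043].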



Abstract

The present invention discloses an object-size-adaptive video concentration method, aimed at solving the technical problem that moving objects in existing video concentration methods easily collide with and occlude each other. According to the technical scheme, moving objects in the original video are detected and extracted using an aggregate channel features (ACF) detection algorithm and a background subtraction algorithm. The temporal position and a reasonable reduced size for each moving object in the condensed video are then determined by a newly designed energy function. In addition to the unreasonable cases considered by existing methods, this energy function includes a size-reduction penalty term for the scaling operation, which prevents moving objects from becoming unrecognizable through excessive size reduction. Finally, using Poisson image editing, the extracted moving objects are seamlessly fused into the background video according to the temporal positions and object sizes computed in the second stage. The object-size-adaptive video concentration method disclosed by the present invention effectively reduces collisions and occlusions between moving objects in the condensed video.
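The abstract does not give the energy function explicitly, so the sketch below is only a hedged illustration of how a size-reduction penalty might be folded into a synopsis-style energy: the function name, the quadratic penalty form, and the weight `alpha` are assumptions, not details from the patent.

```python
def synopsis_energy(activity_cost, collision_cost, temporal_cost,
                    scale, alpha=1.0):
    """Illustrative synopsis energy with a size-reduction penalty.

    `scale` is the factor the object is shrunk by (1.0 = original size).
    The penalty is zero at full size and grows as the object is reduced,
    discouraging excessive shrinking; the quadratic form is an assumption.
    """
    size_penalty = alpha * (1.0 - scale) ** 2
    return activity_cost + collision_cost + temporal_cost + size_penalty
```

Under this form, an optimizer trading off collision cost against the size penalty would shrink an object only as far as the collision reduction outweighs the recognizability loss, which is the balance the abstract describes.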

Description

Technical Field

[0001] The invention relates to video concentration methods, and in particular to a video concentration method based on object-size self-adaptation.

Background Technique

[0002] The document "Y. Pritch, A. Rav-Acha, and S. Peleg, Nonchronological Video Synopsis and Indexing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(11):1971-1987, 2008" proposed an object-based video concentration method. The method first identifies and extracts active objects from the background with a background subtraction algorithm, then compresses the length of the original video by presenting multiple active objects simultaneously within a short period of time. To avoid certain unreasonable situations in the condensed video, the method designs a specific energy function that constrains the position of each active object on the time axis of the condensed video. This energy function mainly includes three penalty terms, and each penalty term corres...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/00
CPC: G06T2207/10016
Inventors: 李学龙 (Li Xuelong), 卢孝强 (Lu Xiaoqiang), 王之港 (Wang Zhigang)
Owner: NORTHWESTERN POLYTECHNICAL UNIV