A Video Summarization Method Based on Spatial-Temporal Recombination of Events

A video summarization technology based on activity events, applied in special data processing applications, instruments, electrical digital data processing, etc. It addresses the problems that existing summaries cannot be applied to surveillance-video scenarios, cannot express the semantic information of a video well, and lose activity information.

Active Publication Date: 2017-02-15
BEIJING UNIV OF POSTS & TELECOMM
Cites: 4 · Cited by: 0


Problems solved by technology

[0006] The commonality of the above two video summaries is that they must strictly follow the chronological order and achieve the purpose of quickly browsing the video at the cost of losing a large amount of activity information. Therefore, they cannot express the semantic information of the video well, and cannot be applied to the scene of surveillance video.

Method used




Embodiment Construction

[0042] In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention will be further described in detail below with reference to the accompanying drawings.

[0043] The video summary generation method of the present invention, based on spatio-temporal recombination of activity events, proceeds as follows: first, the original video is preprocessed and blank frames are removed; the preprocessed video is then structurally analyzed, taking the active targets in the original video as objects, extracting the event videos of all key active targets, weakening the temporal correlation between the individual target events, and recombining those events in time according to the principle that their activity ranges must not conflict. At the same time, background images are extracted with the user's visual experience in mind, and a time-lapse dynamic background video is generated. Finally, these active target events and the time-lapse dynamic background video are seamlessly stitched together to form a video summary that is short in duration, concise in content and comprehensive in information.
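The core recombination step above can be sketched in code. The following is a minimal, hypothetical model (not the patent's actual implementation): each activity event is reduced to a per-frame bounding box, and a greedy earliest-fit scheduler assigns each event the earliest start time on the summary timeline at which its spatial footprint never collides with events already placed, so that activity ranges do not conflict. All names and the conflict model are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2) bounding box for one frame

@dataclass
class Event:
    boxes: List[Box]  # one bounding box per frame of the event clip

def overlaps(a: Box, b: Box) -> bool:
    """Axis-aligned rectangle intersection test."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def conflicts(ev: Event, start: int, timeline: List[List[Box]]) -> bool:
    """True if placing `ev` at `start` would spatially collide with
    any box already committed to the summary timeline."""
    for i, box in enumerate(ev.boxes):
        t = start + i
        if t < len(timeline) and any(overlaps(box, other) for other in timeline[t]):
            return True
    return False

def recombine(events: List[Event]) -> List[int]:
    """Greedy earliest-fit recombination: returns a new start time for each
    event (in input order) such that activity ranges never conflict."""
    timeline: List[List[Box]] = []
    starts: List[int] = []
    for ev in events:
        start = 0
        while conflicts(ev, start, timeline):
            start += 1
        for i, box in enumerate(ev.boxes):  # commit the event's footprint
            t = start + i
            while t >= len(timeline):
                timeline.append([])
            timeline[t].append(box)
        starts.append(start)
    return starts
```

With this model, two events occurring in different regions of the frame are both scheduled at time 0 and play simultaneously, while two events in the same region are serialized, which is exactly the compression effect the method describes.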



Abstract

The invention provides a video summary generation method based on spatio-temporal recombination of activity events. The original video is preprocessed to remove blank frames, and the preprocessed video is then structurally analyzed as follows: moving targets in the original video are taken as objects, the videos of all key moving-target events are extracted, the temporal correlation between these events is weakened, and the events are recombined in time according to the principle that their activity ranges do not conflict. Meanwhile, background images are extracted with reference to the user's visual perception, and a time-lapse dynamic background video is generated. Finally, the moving-target events and the time-lapse dynamic background video are seamlessly stitched together, forming a video summary that is short in duration, concise in content and comprehensive in information, in which multiple moving targets can appear simultaneously. The method can generate video summaries for video browsing or retrieval efficiently and rapidly, and the resulting summaries express the semantic information of the video more reasonably and better match the user's visual perception.

Description

Technical field

[0001] The present invention relates to intelligent analysis technology, and specifically to a video summary generation method based on spatio-temporal recombination of activity events. It belongs to the technical fields of computer artificial intelligence, digital video image processing, and video monitoring or video retrieval.

[0002] Technical background

[0003] In the field of public security, video surveillance systems have become an important technical means of maintaining public order and strengthening social management. Thousands of surveillance cameras are installed in city streets and alleys and on the premises of enterprises and institutions, recording video around the clock. These massive video files are characterized by large storage volumes and long durations; finding relevant clues in the recordings requires a great deal of manpower, material resources and time, and is extremely inefficient. ...

Claims


Application Information

Patent Type & Authority: Patents (China)
IPC(8): G06F17/30
CPC: G06F16/739; G06F16/786
Inventor: 马华东, 李文生, 张海涛, 魏汪洋, 杨军杰, 高一鸿, 黄灏, 赵晓萌
Owner BEIJING UNIV OF POSTS & TELECOMM