
Video interaction event analysis method and device based on sequence space-time cube characteristics

A video interaction event analysis technology based on space-time cubes, applied in the field of computer vision, which addresses the lack of description of local motion characteristics in existing methods and achieves intelligent detection with enhanced descriptive ability.

Inactive Publication Date: 2014-07-02
PEKING UNIV
Cites: 4, Cited by: 22

AI Technical Summary

Problems solved by technology

The existing analysis method lacks a description of the local motion characteristics of the objects, and cannot determine the specific type of a complex event by analyzing the causal relationships among its stages.



Examples


Embodiment Construction

[0039] The present invention will be described in detail below through specific embodiments and accompanying drawings.

[0040] Figure 1 is a structural diagram of the video interaction event analysis device based on sequence space-time cube features of this embodiment. The device includes: a preprocessing module, used to detect and track objects of interest in the surveillance video; a video sequence division module, connected to the preprocessing module, used to adaptively divide the surveillance video into a sequence of space-time cubes based on the detection and tracking results; a space-time cube feature extraction module, connected to the preprocessing module and the video sequence division module, used to extract the visual features of the objects of interest in the surveillance video; a time-series feature reconstruction module, connected to the space-time cube feature extraction module, used to reconstruct the extracted space-time cube features into a variable-length sequence feature; and a sequence feature classification module, connected to the time-series feature reconstruction module, used to detect and classify interaction events from the resulting sequence space-time cube features.
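To make the data flow between these modules concrete, the following Python skeleton is a minimal, hypothetical sketch of one way the five modules could be wired together; all class and method names are illustrative assumptions, not taken from the patent, and the method bodies are intentionally left unimplemented.

```python
# Hypothetical skeleton of the five modules described above; names are
# illustrative assumptions, not taken from the patent text.

class Preprocessor:
    def detect_and_track(self, video):
        """Detect and track the objects of interest in the surveillance video."""
        ...


class VideoSequenceDivider:
    def divide(self, video, tracks):
        """Adaptively split the video into a sequence of space-time cubes
        based on the detection and tracking results."""
        ...


class CubeFeatureExtractor:
    def extract(self, cube, tracks):
        """Extract trajectory, appearance and local-motion descriptors for one cube."""
        ...


class SequenceFeatureReconstructor:
    def reconstruct(self, cube_features):
        """Stack the per-cube feature segments into a variable-length sequence feature."""
        ...


class SequenceClassifier:
    def classify(self, sequence_feature):
        """Classify the interaction event, e.g. with an SVM over a dynamic
        time alignment kernel (see the Abstract)."""
        ...


def analyze(video):
    # Data flow: tracks -> cubes -> per-cube features -> sequence feature -> label.
    tracks = Preprocessor().detect_and_track(video)
    cubes = VideoSequenceDivider().divide(video, tracks)
    extractor = CubeFeatureExtractor()
    features = [extractor.extract(cube, tracks) for cube in cubes]
    sequence = SequenceFeatureReconstructor().reconstruct(features)
    return SequenceClassifier().classify(sequence)
```

The sketch only reflects the module connections stated in paragraph [0040]; the actual detection, division, feature extraction and classification procedures are those described in the patent itself.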



Abstract

The invention relates to a video interaction event analysis method and device based on sequence space-time cube characteristics. The method includes the steps of dividing a surveillance video into a plurality of space-time cube sequences on the basis of the detection and tracking results for the surveillance video, extracting object trajectories and appearance and local motion descriptors in each space-time cube, forming characteristic segments from the extracted descriptors, reconstructing the characteristic segments of all the space-time cubes to establish the sequence space-time cube characteristics, and conducting interaction event classification and detection with the sequence space-time cube characteristics. The device comprises a preprocessing module, a video sequence dividing module, a space-time cube characteristic extraction module, a space-time characteristic reconstruction module and a sequence characteristic classification module. According to the method and the device, a high-level semantic description of the surveillance video content is achieved, and variable-length sequence characteristics are classified by a multi-kernel support vector machine based on a dynamic time alignment kernel function, so that intelligent detection of interaction events in the surveillance video stream is achieved.
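The abstract names a multi-kernel support vector machine over a dynamic time alignment kernel as the classifier for the variable-length sequence characteristics. The patent text shown here does not give the exact recurrence, so the sketch below uses one common published formulation of a dynamic time alignment kernel, with a Gaussian frame-level base kernel combined by dynamic programming and normalized by sequence length; the function names and the sigma parameter are assumptions for illustration, not the patent's definition.

```python
import numpy as np


def gaussian_base_kernel(x, y, sigma=1.0):
    # Frame-level similarity between two feature segments (assumed Gaussian/RBF).
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))


def dtak(X, Y, sigma=1.0):
    """Dynamic time alignment kernel between two variable-length sequences.

    X is an (n, d) array of n per-cube feature segments, Y an (m, d) array;
    the return value can be used as one entry of a precomputed SVM kernel.
    """
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), -np.inf)  # accumulated alignment scores
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            k_ij = gaussian_base_kernel(X[i - 1], Y[j - 1], sigma)
            D[i, j] = max(
                D[i - 1, j] + k_ij,          # advance X only
                D[i - 1, j - 1] + 2 * k_ij,  # advance both (diagonal, double weight)
                D[i, j - 1] + k_ij,          # advance Y only
            )
    return D[n, m] / (n + m)  # normalize by total path length


# Toy usage: two clips with different numbers of space-time cubes.
rng = np.random.default_rng(0)
clip_a = rng.normal(size=(8, 32))   # 8 cubes, 32-dimensional feature segments
clip_b = rng.normal(size=(11, 32))  # 11 cubes
print(dtak(clip_a, clip_b, sigma=4.0))
```

A Gram matrix of such kernel values over the training clips can then be fed to an SVM with a precomputed kernel (for example scikit-learn's SVC(kernel='precomputed')), which is one plausible way to realize the sequence characteristic classification module; the multi-kernel combination over several descriptor types is not sketched here.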

Description

Technical Field

[0001] The invention belongs to the technical field of computer vision and relates to a surveillance video interaction event analysis method, in particular to a surveillance video interaction event analysis method based on sequence space-time cube characteristics, and to a device for realizing the method.

Background Art

[0002] As surveillance cameras are widely used in all aspects of people's lives, surveillance video data is growing explosively, and how to intelligently analyze the events of interest that occur in it is a challenging problem. Among the events of interest, some are multi-object interaction events, such as fights, robberies, homicides and car crashes. In order to raise an alarm in the early stage of such an incident and to assist investigation and evidence collection afterwards, intelligent analysis methods for such incidents are critical.

[0003] At present, the handling of such incidents mainly relies on witnesse...


Application Information

IPC(8): G06K9/00, G06T7/00
Inventors: 田永鸿 (Tian Yonghong), 房晓宇 (Fang Xiaoyu), 王耀威 (Wang Yaowei), 黄铁军 (Huang Tiejun)
Owner: PEKING UNIV