
Video action detection method based on central point trajectory prediction

A spatio-temporal action detection and trajectory prediction technology, applied to neural learning methods, instruments, biological neural network models, etc., achieving strong scalability and portability as well as good robustness and efficiency.

Active Publication Date: 2020-06-09
NANJING UNIV

AI Technical Summary

Problems solved by technology

As video duration increases, the number of anchor boxes in a video grows sharply, which poses great challenges for the training and testing of neural networks.




Embodiment Construction

[0033] Inspired by recent anchor-free object detectors such as CornerNet, CenterNet, and FCOS, this invention re-examines the modeling of spatio-temporal action detection from another perspective. Intuitively, motion is a natural phenomenon in video that more essentially describes human behavior, so spatio-temporal action detection can be simplified to the detection of motion trajectories. On the basis of this analysis, the present invention proposes a new modeling idea that completes the spatio-temporal action detection task by regarding each action instance as the movement trajectory of the center point of the action performer. Specifically, an action sequence is represented by the center point of the action in the middle frame together with the motion vectors of the action center points of the other frames relative to it. To determine the spatial extent of the action instance, the present invention directly regresses the size of the action...
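The representation described above (a middle-frame center point, per-frame motion vectors relative to it, and a regressed size per frame) can be sketched as a simple decoding step. This is a minimal illustrative sketch, not the patent's implementation; the function and array names are assumptions:

```python
import numpy as np

def decode_tube(center, offsets, sizes):
    """Decode one action tube from a middle-frame center point.

    center  : (2,) array, (x, y) of the action center in the middle frame
    offsets : (K, 2) array, per-frame motion vectors relative to `center`
    sizes   : (K, 2) array, per-frame box (width, height) at each center
    Returns : (K, 4) array of boxes (x1, y1, x2, y2), one per frame.
    """
    centers = center[None, :] + offsets   # trajectory of center points
    half = sizes / 2.0
    return np.concatenate([centers - half, centers + half], axis=1)

# Toy example: 3-frame clip, the center moves right by 2 px per frame
center = np.array([50.0, 40.0])
offsets = np.array([[-2.0, 0.0], [0.0, 0.0], [2.0, 0.0]])
sizes = np.full((3, 2), 20.0)
tube = decode_tube(center, offsets, sizes)
```

Because the whole tube is anchored on a single middle-frame point, no anchor boxes need to be enumerated per frame, which is the source of the efficiency claim above.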



Abstract

The invention discloses a video action detection method based on central point trajectory prediction. In video spatio-temporal action detection, each action instance is regarded as the movement track of the central point of the action performer, where the trajectory is represented by the central point of the action in the intermediate frame and the motion vectors of the action central points of the other frames relative to it. First, features are extracted from a video frame sequence to obtain the central point position prediction and action category prediction for the intermediate frame; then the motion trail from the central point of the intermediate frame to the central points of the other frames is obtained; finally, a detection box is generated at the central point of each frame to obtain the spatial localization, thereby yielding the action category and localization result and completing the spatio-temporal detection task on a video clip. The anchor-free video action detection method provided by the invention completes spatio-temporal localization and classification of actions along the temporal sequence of the video; compared with prior-art anchor-based video action detection, it is simpler and more efficient, exhibits robustness and high efficiency, and at the same time has very strong scalability and portability.
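The three-step pipeline in the abstract (intermediate-frame center and category prediction, motion trail to the other frames, per-frame detection boxes) can be sketched as a single decode over dense prediction maps. Everything here is a hedged sketch under assumed map shapes; the branch names and the single-peak readout are illustrative simplifications, not the patent's actual network:

```python
import numpy as np

def detect_clip(heatmap, movement, wh):
    """Decode one detection from the three predictions the abstract names.

    heatmap  : (C, H, W) per-class center heatmap for the middle frame
    movement : (K, 2, H, W) per-frame center offsets from the middle frame
    wh       : (K, 2, H, W) per-frame box width/height at each location
    Returns  : (class_id, score, boxes) for the single best center.
    """
    # Step 1: action category + center position = peak of the heatmap
    c, y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    score = heatmap[c, y, x]
    # Step 2: motion trail = middle-frame center + per-frame offsets
    centers = np.stack([x + movement[:, 0, y, x],
                        y + movement[:, 1, y, x]], axis=1)
    # Step 3: detection boxes from width/height read out at the peak
    half = np.stack([wh[:, 0, y, x], wh[:, 1, y, x]], axis=1) / 2.0
    boxes = np.concatenate([centers - half, centers + half], axis=1)
    return int(c), float(score), boxes

# Toy maps: 2 classes, 4x4 grid, clip of K = 3 frames
heatmap = np.zeros((2, 4, 4)); heatmap[1, 2, 3] = 0.9
movement = np.zeros((3, 2, 4, 4))
wh = np.full((3, 2, 4, 4), 2.0)
cls_id, score, boxes = detect_clip(heatmap, movement, wh)
```

A real detector would keep the top-N peaks per class rather than a single global argmax; one peak keeps the sketch short.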

Description

Technical field

[0001] The invention belongs to the technical field of computer software, relates to spatio-temporal action detection technology, and in particular to a video action detection method based on center point trajectory prediction.

Background technique

[0002] Spatio-temporal action detection is an important research task in the field of computer vision. Its purpose is to classify action instances in videos and to locate them in space and time. Spatio-temporal action detection has broad application prospects in real-world scenarios, such as video surveillance and group action detection. A commonly used approach is to run an action detector independently on each frame to obtain frame-by-frame detections, and then use dynamic programming or object tracking to link the single-frame detection results along the temporal sequence. These methods cannot effectively utilize information in the temporal dimension during single-frame detection, so they do not perform ...
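The frame-linking baseline described in the background (per-frame detections joined by dynamic programming) can be sketched as a small Viterbi-style pass that picks one detection per frame, trading off detection score against overlap between consecutive boxes. This is an illustrative sketch of that general linking scheme, not any specific prior method; the scoring weight `lam` is an assumed parameter:

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def link_detections(frames, lam=1.0):
    """Pick one detection per frame maximizing the sum of detection
    scores plus lam * IoU between consecutive boxes (dynamic programming).

    frames : list of per-frame lists of (box, score)
    Returns: list of chosen (box, score), one per frame.
    """
    dp = [[s for _, s in frames[0]]]   # dp[t][j]: best total ending at det j
    back = []
    for t in range(1, len(frames)):
        row, ptr = [], []
        for box, score in frames[t]:
            cand = [dp[t - 1][k] + lam * iou(frames[t - 1][k][0], box)
                    for k in range(len(frames[t - 1]))]
            k = int(np.argmax(cand))
            row.append(cand[k] + score)
            ptr.append(k)
        dp.append(row)
        back.append(ptr)
    j = int(np.argmax(dp[-1]))         # backtrack the best path
    path = [j]
    for ptr in reversed(back):
        j = ptr[j]
        path.append(j)
    path.reverse()
    return [frames[t][j] for t, j in enumerate(path)]

# Toy example: 2 frames, 2 detections each; linking prefers the
# spatially consistent pair even though one score is lower
frames = [
    [((0, 0, 10, 10), 0.9), ((100, 100, 110, 110), 0.5)],
    [((1, 0, 11, 10), 0.4), ((100, 100, 110, 110), 0.6)],
]
track = link_detections(frames)
```

As the background notes, each per-frame detection here is made without temporal context; the linking step can only connect boxes after the fact, which is the limitation the invention targets.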

Claims


Application Information

IPC(8): G06K 9/00; G06N 3/04; G06N 3/08
CPC: G06N 3/08; G06V 20/41; G06N 3/048; G06N 3/045
Inventors: 王利民 (Limin Wang), 李奕萱 (Yixuan Li), 王子旭 (Zixu Wang), 武港山 (Gangshan Wu)
Owner: NANJING UNIV