
Video motion identification method based on time domain segmentation network

A time domain segmentation and action recognition technology, applied in the field of action recognition, can solve the problems of video loss of important information, limited data resources, and limited video duration.

Inactive Publication Date: 2017-12-15
SHENZHEN WEITESHI TECH
Cites: 0 · Cited by: 23

AI Technical Summary

Problems solved by technology

However, training deep convolutional neural networks with traditional methods requires large numbers of training samples, while the data resources in this area are limited; in addition, limited storage space severely restricts the duration of the video, which causes the video to lose important information.




Embodiment Construction

[0056] It should be noted that, in the case of no conflict, the embodiments in the present application and the features in the embodiments can be combined with each other. The present invention will be further described in detail below in conjunction with the drawings and specific embodiments.

[0057] Figure 1 is a system framework diagram of the video action recognition method based on a temporal segment network according to the present invention. It mainly includes: a temporal segment network (TSN) based on segmented sampling; aggregation functions and their analysis; the input and training strategy of the temporal segment network; and action recognition in untrimmed video.
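The segmented-sampling step named above (divide each video into segments of equal duration, then draw one snippet at random from each segment) can be sketched as follows. This is a minimal illustration in plain Python; the function name and the use of raw frame indices are assumptions for the sketch, not details from the patent.

```python
import random

def sample_snippets(num_frames, k=3, seed=None):
    """Divide a video of num_frames frames into k segments of
    (near-)equal duration and randomly draw one frame index
    (a "snippet") from each segment."""
    rng = random.Random(seed)
    # segment boundaries: [0, n/k), [n/k, 2n/k), ...
    bounds = [num_frames * i // k for i in range(k + 1)]
    return [rng.randrange(bounds[i], bounds[i + 1]) for i in range(k)]

# Example: a 300-frame video split into 3 segments of 100 frames each;
# one frame index is drawn from each 100-frame segment.
indices = sample_snippets(300, k=3, seed=0)
```

Because exactly one snippet is drawn per segment, the samples are spread over the whole video, which is what lets the network see long-range temporal structure at a fixed computational cost.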

[0058] Aggregation functions and analysis: the consensus (aggregation) function is an important part of the TSN framework. Five types of aggregation functions are proposed: max pooling, average pooling, top-K pooling, weighted averaging, and attention weighting.
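The five consensus functions all map a set of segment-level class scores to one video-level score. A minimal stdlib-only sketch is below; the particular attention logit (each segment's strongest class score) is an illustrative assumption, since the patent text does not specify how the attention weights are computed.

```python
from math import exp

def aggregate(seg_scores, method="average", weights=None, k=2):
    """Combine per-segment class scores (a list of [num_classes]
    lists) into one video-level score with a consensus function."""
    n = len(seg_scores)
    c = len(seg_scores[0])
    cols = [[row[j] for row in seg_scores] for j in range(c)]  # per-class columns
    if method == "max":        # max pooling: strongest segment per class
        return [max(col) for col in cols]
    if method == "average":    # average pooling: mean over segments
        return [sum(col) / n for col in cols]
    if method == "topk":       # top-K pooling: mean of K best segments per class
        return [sum(sorted(col)[-k:]) / k for col in cols]
    if method == "weighted":   # weighted averaging with fixed segment weights
        s = sum(weights)
        return [sum(w * v for w, v in zip(weights, col)) / s for col in cols]
    if method == "attention":  # softmax attention over segments
        logits = [max(row) for row in seg_scores]  # assumed attention logit
        m = max(logits)
        a = [exp(v - m) for v in logits]
        z = sum(a)
        a = [v / z for v in a]
        return [sum(ai * v for ai, v in zip(a, col)) for col in cols]
    raise ValueError(method)

scores = [[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]]  # 3 segments, 2 classes
# max pooling gives [0.8, 0.9]; average pooling gives approximately [0.4, 0.6]
```

Max pooling keys the video-level decision to the single most confident segment, while average pooling lets every segment vote equally; the other three sit between these extremes.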

[0059] Max pooling: in this aggregation function, assig...



Abstract

The present invention provides a video action recognition method based on a temporal segment network (TSN). The method mainly comprises: a temporal segment network based on segmented sampling; aggregation functions and their analysis; the input and training strategy of the temporal segment network; and action recognition in untrimmed video. The process of the method comprises: dividing the video into consecutive segments of equal duration; randomly extracting one snippet from each segment; generating a segment-level action prediction for each snippet in the sequence; designing a consensus function that aggregates the segment-level predictions into a video-level score; and, during training, defining the optimization objective on the video-level prediction and optimizing it by iteratively updating the model parameters. The segment-based sampling and aggregation modules establish long-range temporal structure, so that the whole action video is used to learn the action model effectively, long videos can be handled, and the sensitivity and accuracy of action detection and recognition are improved.
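The training loop described in the abstract (segment-level predictions aggregated into a video-level score, with the loss defined on that score and the parameters updated iteratively) can be sketched on a toy problem. All numbers, dimensions, and the choice of a shared linear classifier with average-pooling consensus and cross-entropy loss are illustrative assumptions; the patent does not pin down these details.

```python
from math import exp

def softmax(z):
    m = max(z)
    e = [exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Toy setup: 3 snippet feature vectors (2-d), 2 action classes,
# one linear classifier W shared by all snippets.
snippets = [[1.0, -0.5], [0.5, 1.5], [-0.2, 0.8]]
label = 1                          # ground-truth action class
W = [[0.0, 0.0], [0.0, 0.0]]       # feature dim x num classes

lr = 0.5
for _ in range(200):
    # consensus: average over snippets (for a linear classifier this
    # equals averaging the segment-level scores), then classify
    x = [sum(s[d] for s in snippets) / len(snippets) for d in range(2)]
    p = softmax([x[0] * W[0][c] + x[1] * W[1][c] for c in range(2)])
    # gradient of cross-entropy on the VIDEO-level prediction
    g = [p[c] - (1.0 if c == label else 0.0) for c in range(2)]
    for d in range(2):
        for c in range(2):
            W[d][c] -= lr * x[d] * g[c]

x = [sum(s[d] for s in snippets) / len(snippets) for d in range(2)]
probs = softmax([x[0] * W[0][c] + x[1] * W[1][c] for c in range(2)])
# after the iterative updates, the video-level prediction favors `label`
```

The key point the sketch shows is that the loss is computed once per video on the aggregated score, and the gradient then flows back through the consensus function to the shared snippet-level parameters.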

Description

Technical field

[0001] The invention relates to the field of action recognition, and in particular to a video action recognition method based on a temporal segment network.

Background

[0002] With the rapid development of science and technology and the progress of society, people use video acquisition technology in many aspects of daily life. However, after acquiring large amounts of video data, people often need to manually watch, identify, and label the actions in the videos. Video action recognition technology has therefore attracted increasing attention, and its range of applications keeps widening, for example: identifying and analyzing the behavior of suspicious people in surveillance video of vending machines, ATMs, shopping malls, stations, and other public places; analyzing shooting actions in basketball games; analyzing dance videos for practice; and identifying and detecting dangerous actions of drivers in road d...

Claims


Application Information

IPC(8): G06K9/00 G06N3/04 G06N3/08 H04N21/44 H04N21/4402
CPC: H04N21/44008 H04N21/440245 G06N3/084 G06V20/41 G06N3/045
Inventor: 夏春秋
Owner: SHENZHEN WEITESHI TECH