Training method and recognition method for human action recognition

A human action recognition technology and training method, applied in the field of video analysis, which addresses the problems that existing methods degrade in performance or become inapplicable in natural scenes and that the detected interest points are too few to capture all the characteristics of human action, and achieves good recognition results

Status: Inactive | Publication Date: 2010-11-24
INST OF COMPUTING TECH CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

The problem with this interest-point detection approach is that the detected points are sometimes too few to capture all the features of human motion, which leads to a decline in recognition performance.
However, these early studies were limited to human action recognition in restricted scenes, with fixed viewpoints, actors, backgrounds, and lighting. In natural scenes, once these restrictions are removed, the performance of such methods drops sharply or the methods become inapplicable altogether.



Examples


Embodiment Construction

[0052] Before describing the present invention in detail, several concepts used throughout the invention are first explained.

[0053] Spatio-temporal interest points (STIPs): a given video sequence is processed by a spatio-temporal interest point detector (such as those proposed in the aforementioned references 2 and 4); within a certain threshold range, after non-maximum suppression, the local maxima of the detector's response function are taken as spatio-temporal interest points. Spatio-temporal interest points exhibit large variation in both the temporal and spatial dimensions and are generally described by histograms of optical flow or histograms of gradients. Because they are local, they have good invariance to rotation, translation and scaling, but provide no description of the global motion.
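
As an illustration of the detection step above, the following minimal sketch (not taken from the patent) applies thresholding and non-maximum suppression to a pre-computed detector response volume; the function name, window size and threshold are illustrative assumptions.

    # Minimal sketch: pick spatio-temporal interest points as thresholded local
    # maxima of a detector response volume. The response itself (e.g. from a 3D
    # Harris or periodic detector) is assumed to be computed elsewhere.
    import numpy as np
    from scipy.ndimage import maximum_filter

    def detect_stips(response: np.ndarray, threshold: float, window: int = 5):
        """Return array indices of local maxima of `response` above `threshold`."""
        # A point is a local maximum if it equals the maximum of its neighborhood.
        local_max = response == maximum_filter(response, size=window)
        strong = response > threshold
        return np.argwhere(local_max & strong)

Each detected point would then be described by a local descriptor, for example a histogram of gradients and/or optical flow computed over a small spatio-temporal cuboid around it.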

[0054] Video words: In the set of spatio-temporal interest point descriptors extracted from all training videos, a subset is randomly selected and clustered us...
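
The paragraph above is truncated before naming the clustering algorithm; assuming k-means (a common choice, not confirmed by the visible text), a sketch of building the video-word vocabulary and quantizing descriptors to it could look as follows. The vocabulary size, sample size and function names are illustrative assumptions.

    # Sketch: cluster a random subset of STIP descriptors into video words,
    # then assign every descriptor to its nearest word and build a histogram.
    import numpy as np
    from sklearn.cluster import KMeans

    def build_vocabulary(descriptors, n_words=1000, sample_size=100000, seed=0):
        """Cluster a random subset of descriptors into `n_words` video words."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(descriptors), size=min(sample_size, len(descriptors)),
                         replace=False)
        return KMeans(n_clusters=n_words, n_init=4, random_state=seed).fit(descriptors[idx])

    def quantize(descriptors, vocabulary):
        """Assign each descriptor to the index of its nearest video word."""
        return vocabulary.predict(descriptors)

    def word_histogram(word_ids, n_words):
        """Bag-of-video-words histogram for one video."""
        return np.bincount(word_ids, minlength=n_words).astype(float)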



Abstract

The invention provides a training method for human action recognition, comprising the following steps:
  • extracting spatio-temporal interest points from a video file;
  • quantizing all the spatio-temporal interest points to corresponding video words according to the feature descriptors they contain, and generating a statistical histogram of the video words;
  • obtaining the other video words in the spatio-temporal neighborhood of a video word according to the spatio-temporal context information in that neighborhood, and forming spatio-temporal video phrases from the video word and one of the other video words that satisfies a spatio-temporal constraint;
  • clustering the spatio-temporal contexts in the spatio-temporal neighborhood of the video words to obtain context words, and forming spatio-temporal video word groups from the video words and the context words;
  • selecting representative spatio-temporal video phrases from the spatio-temporal video phrases, and representative spatio-temporal video word groups from the spatio-temporal video word groups; and
  • training a classifier using the result of fusing one or more of the following features: the video words, the representative spatio-temporal video phrases and the representative spatio-temporal video word groups.
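
To make the phrase-forming step more concrete, the sketch below pairs each video-word occurrence with other occurrences inside its spatio-temporal neighborhood. The neighborhood radii and the constraint test are illustrative assumptions; the patent's selection of representative phrases, the context-word clustering and the feature fusion are not shown.

    # Sketch: form spatio-temporal video phrases as pairs of video-word
    # occurrences that are close in both time and space.
    import numpy as np
    from collections import Counter

    def form_phrases(points, max_dt=10, max_dxy=20.0):
        """points: list of (word_id, t, x, y) occurrences in one video.
        Returns a Counter over unordered word-id pairs (candidate phrases)."""
        phrases = Counter()
        for i, (wi, ti, xi, yi) in enumerate(points):
            for wj, tj, xj, yj in points[i + 1:]:
                # spatio-temporal constraint: close in both time and space
                if abs(ti - tj) <= max_dt and np.hypot(xi - xj, yi - yj) <= max_dxy:
                    phrases[tuple(sorted((wi, wj)))] += 1
        return phrases

The resulting phrase histogram could then be concatenated with the video-word histogram (and any context-word features) before training a standard classifier such as an SVM.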

Description

Technical field
[0001] The invention relates to the field of video analysis, and in particular to a training method and a recognition method for human action recognition.
Background technique
[0002] In recent years, with the rapid popularization and development of media such as film, television and the Internet, video has become the main carrier of information. The amount of video data is growing explosively, and a large amount of new content is generated every moment. Faced with massive video data, how to automatically obtain and analyze the information it contains, and understand the actions, behaviors or events occurring in it, has become an urgent problem.
[0003] Most videos record the activities of people, who are the main actors in social life. How to enable computers to "see" the video, or to "understand" the actions of the people in it, has become an important issue in computer vision, image processing, pattern recognition, m...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/66
CPC: G06V40/20; G06V10/62; G06V10/462
Inventors: QIN Lei (秦磊), HU Qiong (胡琼), HUANG Qingming (黄庆明), JIANG Shuqiang (蒋树强)
Owner: INST OF COMPUTING TECH CHINESE ACAD OF SCI