Behavior recognition method based on sparse spatial-temporal characteristics

A spatio-temporal feature and recognition technology, applied in character and pattern recognition, instruments, computer components, etc.; it addresses problems such as the inability to guarantee a globally optimal solution, and achieves the effect of improving the behavior recognition rate and overall performance.

Active Publication Date: 2015-09-23
SUZHOU UNIV
Cites: 4 · Cited by: 55


Problems solved by technology

[0010] 3. The computational complexity is high. Moreover, the deep learning model cannot guarantee that a globally optimal solution is obtained.

Method used



Examples


Embodiment 1

[0030] Embodiment 1: As shown in Figure 1, a behavior recognition method based on sparse spatio-temporal features includes the following steps:

[0031] Step 1: Convolve the input video with spatio-temporal Gabor filters to construct a scale space;
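As a rough illustration of Step 1, the sketch below builds a small scale space by convolving a video volume (frames × height × width) with 3-D Gabor-like kernels of increasing bandwidth. The kernel form (isotropic Gaussian envelope with a temporal cosine carrier) and all parameter values are illustrative assumptions, not the patent's actual filter bank:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_3d(size=7, sigma=1.5, freq=0.25):
    """Illustrative spatio-temporal Gabor kernel: isotropic Gaussian
    envelope times a cosine carrier along the temporal axis."""
    ax = np.arange(size) - size // 2
    t, y, x = np.meshgrid(ax, ax, ax, indexing="ij")
    envelope = np.exp(-(t**2 + y**2 + x**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * freq * t)
    kernel = envelope * carrier
    return kernel / np.abs(kernel).sum()

def build_scale_space(video, sigmas=(1.0, 1.5, 2.0)):
    """Convolve the raw video with Gabor kernels of increasing sigma;
    each sigma yields one scale of the scale space."""
    return [convolve(video, gabor_3d(sigma=s), mode="nearest") for s in sigmas]

video = np.random.rand(16, 32, 32)  # frames x height x width
scales = build_scale_space(video)
print(len(scales), scales[0].shape)  # 3 scales, each same shape as the video
```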

[0032] Step 2: Use the representations at different scales as the values of different channels of the spatio-temporal deep belief network, and jointly learn multi-scale features;
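Step 2's channel arrangement could be approximated as follows: the scale representations are stacked along a channel axis and fed to a single restricted Boltzmann machine trained with one step of contrastive divergence (CD-1). This is a minimal stand-in for one layer of a deep belief network, not the patented spatio-temporal architecture; all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal binary RBM trained with CD-1 (mean-field updates);
    a sketch of one DBN layer, not the patented multi-channel model."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible bias
        self.c = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def train_step(self, v0):
        h0 = sigmoid(v0 @ self.W + self.c)        # up pass
        v1 = sigmoid(h0 @ self.W.T + self.b)      # reconstruction
        h1 = sigmoid(v1 @ self.W + self.c)        # up pass again
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (h0 - h1).mean(axis=0)

# Multi-scale input: stack 3 scale representations as channels,
# then flatten each sample into one visible vector.
scales = [rng.random((10, 4, 4)) for _ in range(3)]   # 10 samples per scale
x = np.stack(scales, axis=1).reshape(10, -1)          # (10, 3*16)
rbm = RBM(n_visible=x.shape[1], n_hidden=8)
for _ in range(20):
    rbm.train_step(x)
print(rbm.W.shape)  # (48, 8)
```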

[0033] Step 3: Identify and classify behavioral features.

[0034] In Step 1, considering the complexity of model training, the information loss between representations at different scales is measured by entropy, and from seven candidate scales the three with the smallest loss are selected as the multi-scale representation of the input video, which is then fed into the deep model for multi-scale feature learning.
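The excerpt does not define the loss measure precisely; assuming it is the difference in histogram entropy between each scale representation and the original video, the selection could be sketched as:

```python
import numpy as np

def entropy(x, bins=64):
    """Shannon entropy (base 2) of an intensity histogram."""
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def select_scales(original, scales, k=3):
    """Rank scale representations by entropy lost relative to the
    original video and keep the k with the smallest loss."""
    h0 = entropy(original)
    losses = [abs(h0 - entropy(s)) for s in scales]
    keep = np.argsort(losses)[:k]
    return sorted(keep.tolist())

rng = np.random.default_rng(0)
video = rng.random((8, 16, 16))
# 7 mock "scale representations": increasing perturbation of the input
scales = [video + rng.normal(0, 0.01 * i, video.shape) for i in range(7)]
print(select_scales(video, scales))  # indices of the 3 lowest-loss scales
```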

[0035] In this embodiment, the Gabor function is used to fit the receptive field respo...

Embodiment 2

[0069] Embodiment 2: The behavior database used in this embodiment is KTH (Kungliga Tekniska högskolan, the Royal Institute of Technology, Sweden), which includes six types of behaviors: boxing, handclapping, handwaving, jogging, running and walking; each behavior is performed repeatedly by 25 actors in four different environments. Nine actors in the dataset (actors 2, 3, 5, 6, 7, 8, 9, 10 and 22) form the test set, and the remaining 16 actors are divided equally into training and validation sets. Experimental hardware environment: Linux, Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.1 GHz, 62.9 GB memory, 1 TB hard disk. Code environment: MATLAB 2013a.

[0070] See Figure 3, which shows the motion information of the boxing behavior on KTH at different scales; each column corresponds to a different frame of the video. It can be seen from the figure that the scale (in the representation used here) keeps getting b...



Abstract

The invention discloses a behavior recognition method based on sparse spatio-temporal characteristics. The method comprises the following steps: Step 1, convolve the input video with spatio-temporal Gabor filters to build a scale space; Step 2, use the representations at different scales as the values of different channels of a spatio-temporal deep belief network and jointly learn multi-scale features; Step 3, recognize and classify the behavior features. By building the scale space and feeding it into the deep network for joint multi-scale feature learning, the method improves behavior recognition performance. To address the information loss caused by the pooling operation, the method introduces the idea of the spatial pyramid, expands the pooling output over multiple levels, and fuses the multi-level pyramid features with sparse coding, thereby raising the feature dimension output by the pooling layer, further improving the performance of the original network and the behavior recognition rate.
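The spatial-pyramid expansion of the pooling output described above can be illustrated as multi-level max pooling over a feature map. The grid levels here (1×1, 2×2, 4×4) are assumed, and the sparse-coding fusion step is omitted; the sketch only shows how the pyramid raises the pooled feature dimension:

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a 2-D feature map over a pyramid of grids and
    concatenate all cells into one feature vector."""
    h, w = fmap.shape
    feats = []
    for n in levels:
        ys = np.linspace(0, h, n + 1, dtype=int)
        xs = np.linspace(0, w, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                feats.append(fmap[ys[i]:ys[i+1], xs[j]:xs[j+1]].max())
    return np.array(feats)

fmap = np.arange(64, dtype=float).reshape(8, 8)
v = spatial_pyramid_pool(fmap)
print(v.shape)  # (21,): 1 + 4 + 16 pyramid cells instead of a single max
```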

Description

Technical field

[0001] The invention relates to a behavior recognition method, in particular to a behavior recognition method based on sparse spatio-temporal features, which can automatically extract human behavior characteristics from videos for human behavior recognition.

Background technique

[0002] Human behavior recognition determines the behavior pattern of a human body by analyzing the correlations and visual appearance characteristics of human body images in a video sequence. The behavior recognition process mainly includes two parts: motion information extraction and behavior recognition. Assuming the model itself is error-free, the extracted behavior features determine the upper limit of the entire model's capability; recognition or prediction by the model only serves to approach this upper limit as closely as possible. [0003] Automatically obtaining information related to human behavior has become an urgent problem to be solved in many fields. In the fiel...

Claims


Application Information

IPC(8): G06K9/00, G06K9/62
CPC: G06V40/23, G06V40/103, G06F18/2411
Inventors: 龚声蓉, 王露, 刘纯平, 王朝晖, 朱桂墘, 葛瑞
Owner: SUZHOU UNIV