
A Behavior Recognition Method Based on Sparse Spatiotemporal Features

A technology of spatiotemporal features and recognition methods, applied in character and pattern recognition, instruments, calculations, etc. It addresses problems such as the inability of iterative optimization to guarantee a globally optimal solution, and achieves the effect of improving the behavior recognition rate and overall performance.

Active Publication Date: 2019-03-15
SUZHOU UNIV

AI Technical Summary

Problems solved by technology

[0010] 3. Computational complexity. Moreover, the deep learning model cannot guarantee finding the globally optimal solution during its iterative optimization; this remains an open problem for future work.



Examples


Embodiment 1

[0030] Embodiment 1: Referring to Figure 1, a behavior recognition method based on sparse spatiotemporal features includes the following steps:

[0031] Step 1: For the input video, convolve the original video with spatiotemporal Gabor filters to construct a scale space.

[0032] Step 2: Use the representations at different scales as the values of different channels of a spatiotemporal deep belief network, and jointly learn multi-scale features.

[0033] Step 3: Recognize and classify the behavioral features.
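The Gabor-based scale-space construction of Step 1 might look like the following minimal numpy-only sketch. It is not the patent's implementation: the separable spatial-Gaussian-times-temporal-Gabor kernel form, all parameter defaults, and all function names are assumptions made for illustration.

```python
import numpy as np

def temporal_gabor(tau=2.0, omega=0.25, half_len=5):
    """1-D Gabor kernel applied along the time axis (assumed form)."""
    t = np.arange(-half_len, half_len + 1, dtype=float)
    g = np.exp(-t**2 / (2 * tau**2)) * np.cos(2 * np.pi * omega * t)
    return g / np.abs(g).sum()

def gaussian_kernel(sigma):
    """1-D Gaussian smoothing kernel for the spatial axes."""
    half_len = int(3 * sigma)
    x = np.arange(-half_len, half_len + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth_axis(vol, kernel, axis):
    """Separable 1-D 'same' convolution along one axis of a (T, H, W) volume.
    Kernels must be no longer than the axis they smooth."""
    return np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode='same'), axis, vol)

def scale_space(video, sigmas, tau=2.0, omega=0.25):
    """One filter-response volume per spatial scale sigma."""
    gt = temporal_gabor(tau, omega)
    reps = []
    for s in sigmas:
        gk = gaussian_kernel(s)
        r = smooth_axis(video, gk, axis=1)   # smooth rows
        r = smooth_axis(r, gk, axis=2)       # smooth columns
        r = smooth_axis(r, gt, axis=0)       # temporal Gabor response
        reps.append(r)
    return reps
```

Each element of the returned list is one channel of the multi-scale representation that Step 2 feeds into the deep belief network.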

[0034] In Step 1, considering the complexity of model training, three scales are chosen from the seven available representations: based on an entropy measure of the information lost between representations at different scales, the three scales with the smallest loss are selected as the multi-scale representation of the input video and fed into the deep model for multi-scale feature learning.
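The entropy-based scale selection could be sketched as follows. This is one plausible reading, not the patent's definition: the finest scale is taken as the reference, and the scales losing the least entropy relative to it are kept; `entropy`, `select_scales`, and the histogram binning are illustrative assumptions.

```python
import numpy as np

def entropy(x, bins=64):
    """Shannon entropy (bits) of a histogram over the response values."""
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def select_scales(reps, keep=3):
    """Keep the `keep` scales whose entropy deviates least from the
    finest-scale (index 0) reference, i.e. the smallest information loss."""
    h0 = entropy(reps[0])
    losses = [abs(h0 - entropy(r)) for r in reps]
    order = np.argsort(losses)[:keep]
    return sorted(order.tolist())
```

With this reading the reference scale itself always survives (its loss is zero), and the two scales closest to it in information content complete the three-channel input.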

[0035] In this embodiment, the Gabor function is used to fit the receptive field respo...

Embodiment 2

[0070] Embodiment 2: The behavior database used in this embodiment is KTH (Kungliga Tekniska högskolan, the Royal Institute of Technology, Sweden), which includes six classes of behaviors: boxing, handclapping, handwaving, jogging, running, and walking; each behavior is performed multiple times by 25 actors in four different environments. Nine actors (actors 2, 3, 5, 6, 7, 8, 9, 10, and 22) form the test set, and the remaining 16 actors are divided equally into training and validation sets. Experimental hardware environment: Linux, Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.1 GHz, 62.9 GB memory, 1 TB hard disk. Code environment: MATLAB 2013a.
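The actor split above can be reproduced mechanically. The random halving of the 16 remaining actors into training and validation sets is an assumption, since the text does not specify how the equal division is made:

```python
import random

# Actors held out for testing, as listed in the text.
TEST_ACTORS = [2, 3, 5, 6, 7, 8, 9, 10, 22]

def kth_split(seed=0):
    """Split the 25 KTH actors: 9 fixed test actors; the remaining
    16 are halved into train/val (random halving is an assumption)."""
    rest = [a for a in range(1, 26) if a not in TEST_ACTORS]
    rng = random.Random(seed)
    rng.shuffle(rest)
    return sorted(rest[:8]), sorted(rest[8:]), TEST_ACTORS
```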

[0071] Referring to Figure 3, the motion information of the boxing behavior on KTH is shown at different scales, with each column corresponding to a different frame of the video. It can be seen from the figure that, as the scale of the representation used here keeps getting b...



Abstract

The invention discloses a behavior recognition method based on sparse spatiotemporal features, comprising the following steps: Step 1, for the input video, convolve the original video with spatiotemporal Gabor filters to construct a scale space; Step 2, use the representations at different scales as the values of different channels of a spatiotemporal deep belief network, and jointly learn multi-scale features; Step 3, recognize and classify the behavioral features. By constructing a scale space and feeding it into a deep network, the invention jointly learns multi-scale features and improves behavior recognition performance. To address the information loss caused by the pooling operation, it introduces the idea of the spatial pyramid, expands the pooling output over multiple levels, and fuses the multi-level pyramid features with sparse coding. This reduces the feature dimension of the pooling-layer output, further improves the performance of the original network, and raises the behavior recognition rate.
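The spatial-pyramid expansion of the pooling output mentioned in the abstract can be illustrated with a small max-pooling sketch. The grid levels and the function name are assumptions, and the subsequent sparse-coding fusion step is omitted:

```python
import numpy as np

def pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a (H, W) feature map over several grid resolutions
    (1x1, 2x2, 4x4 by default) and concatenate the pooled values."""
    H, W = fmap.shape
    feats = []
    for L in levels:
        row_blocks = np.array_split(np.arange(H), L)
        col_blocks = np.array_split(np.arange(W), L)
        for rb in row_blocks:
            for cb in col_blocks:
                feats.append(fmap[np.ix_(rb, cb)].max())
    return np.array(feats)
```

With the default levels the output has 1 + 4 + 16 = 21 values per map; in the patent's pipeline, features pooled at the several levels would then be fused by sparse coding rather than simply concatenated.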

Description

Technical field

[0001] The invention relates to a behavior recognition method, in particular to a behavior recognition method based on sparse spatiotemporal features, which can automatically extract human behavior characteristics from video for the recognition of human behavior.

Background technique

[0002] Human behavior recognition determines the behavior pattern of a human body by analyzing the correlations and visual appearance characteristics of human images in a video sequence. The behavior recognition process mainly includes two parts: motion information extraction and behavior recognition. Assuming the model itself is error-free, the extracted behavior features determine the upper limit of the whole model's capability; the model's recognition or prediction can only approach this upper limit.

[0003] Automatically obtaining information related to human behavior has become an urgent problem to be solved in many fields. In the fiel...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K 9/00, G06K 9/62
CPC: G06V 40/23, G06V 40/103, G06F 18/2411
Inventors: 龚声蓉, 王露, 刘纯平, 王朝晖, 朱桂墘, 葛瑞
Owner: SUZHOU UNIV