Action recognition method based on three-dimensional convolutional deep neural network and depth video

A deep neural network and three-dimensional convolution technology, applied in the field of behavior recognition, addresses the problems of high computational overhead, difficulty in achieving real-time performance, and slow research progress, and achieves good recognition performance, good generalization performance, and good recognition results.

Inactive Publication Date: 2016-12-07
CHONGQING UNIV OF TECH

AI Technical Summary

Problems solved by technology

However, research on behavior recognition methods based on hand-crafted features has progressed slowly in recent years, mainly for three reasons. First, in order to retain behavior information effectively, the extracted feature dimensions grow ever higher and the computational overhead becomes excessive, making real-time performance difficult to achieve. Second, hand-designed features are tuned to a specific data set and generalize poorly to other data sets. Third, the steps of traditional behavior recognition methods are isolated, so classification results cannot be fed back automatically to feature extraction and description.
In short, traditional human behavior recognition based on hand-designed features involves many stages and suffers from high time consumption and difficulty of optimizing the overall algorithm end to end.

Method used



Examples


Embodiment Construction

[0033] This embodiment discloses an action recognition method based on a three-dimensional convolutional deep neural network and depth video, including the following steps:

[0034] (1) Establish a training data set. The training data set used in this embodiment is the MSR-Action3D data set or the UTKinect-Action3D data set.

[0035] (2) Construct a deep neural network model based on three-dimensional convolution. Figure 1 shows the three-dimensional convolution-based deep neural network model designed by the present invention. The network has two three-dimensional convolutional layers (Convolution Layer), in which the convolution operation spans both the spatial and temporal dimensions; the numbers of feature maps in the two convolutional layers are 32 and 128, respectively. The convolution kernels of the three-dimensional convolutional layers are three-dimensional, and the feature maps obtained after convolution are also three-dimensional. Since the video sizes of the two dataset...
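As a rough illustration of the core operation described above (not the patent's actual implementation), the sketch below applies a single three-dimensional convolution kernel to a depth-video clip in NumPy. The clip and kernel sizes are arbitrary assumptions; the network in the patent would stack many such learned kernels (32 and 128 feature maps per layer).

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid 3D convolution (cross-correlation) of a single-channel
    video volume (frames x height x width) with one 3D kernel."""
    T, H, W = volume.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # each output value pools information across space AND time
                out[i, j, k] = np.sum(volume[i:i + t, j:j + h, k:k + w] * kernel)
    return out

clip = np.random.rand(7, 32, 32)    # 7 depth frames of 32x32 (assumed sizes)
kernel = np.random.rand(3, 5, 5)    # 3x5x5 spatio-temporal kernel (assumed)
fmap = conv3d(clip, kernel)
print(fmap.shape)                   # -> (5, 28, 28)
```

Because the kernel extends over three frames, the resulting feature map is itself three-dimensional, matching the description above.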



Abstract

The invention discloses an action recognition method based on a three-dimensional convolutional deep neural network and depth video. Depth video is taken as the object of study; a three-dimensional convolutional deep neural network is constructed to automatically learn the temporal and spatial characteristics of human behaviors, and a Softmax classifier is used for classification and recognition. The proposed method can effectively extract the latent characteristics of human behaviors and achieves good recognition results not only on the MSR-Action3D data set but also on the UTKinect-Action3D data set.
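The Softmax classifier mentioned in the abstract turns the network's raw class scores into a probability distribution over actions. A minimal generic sketch (the scores below are hypothetical, not taken from the patent):

```python
import numpy as np

def softmax(z):
    """Map raw class scores to a probability distribution."""
    z = z - z.max()           # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # hypothetical per-action scores
probs = softmax(scores)
print(probs.argmax())               # -> 0 (the highest-scoring action wins)
```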

Description

Technical field
[0001] The invention relates to the field of action recognition, in particular to an action recognition method based on a three-dimensional convolutional deep neural network and depth video.
Background technique
[0002] As a popular technology in video analysis, human behavior recognition has gradually begun to be applied in daily life, for example in abnormal event detection for automatic surveillance, video retrieval, and human-machine interfaces. Traditional human action recognition comprises three steps: feature extraction, feature representation, and classification. First, hand-crafted features are extracted from video sequences. Second, techniques such as transformation and clustering are used to construct more discriminative descriptors from the extracted features. Finally, a classifier is used to classify and identify the descriptors. Behavior recognition methods based on feature extraction have achieved very promising rese...
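The three traditional steps described above can be sketched minimally. Every function name and the crude per-frame mean/std "feature" below are illustrative assumptions, standing in for the hand-crafted features (and a real classifier) the background refers to:

```python
import numpy as np

def extract_features(video):
    """Step 1: crude per-frame hand-crafted features (mean and std
    of pixel intensities) -- illustrative only."""
    return np.stack([video.mean(axis=(1, 2)), video.std(axis=(1, 2))], axis=1)

def describe(features):
    """Step 2: pool per-frame features into one fixed-length descriptor."""
    return features.mean(axis=0)

def nearest_centroid(descriptor, centroids):
    """Step 3: classify by distance to per-class centroid descriptors."""
    dists = np.linalg.norm(centroids - descriptor, axis=1)
    return int(dists.argmin())

# toy "videos" standing in for real sequences (assumed shapes)
dark, bright = np.zeros((5, 8, 8)), np.ones((5, 8, 8))
centroids = np.stack([describe(extract_features(dark)),
                      describe(extract_features(bright))])
query = np.full((5, 8, 8), 0.9)
print(nearest_centroid(describe(extract_features(query)), centroids))  # -> 1
```

The isolation the patent criticizes is visible here: the classifier's result never feeds back into how the features are extracted or described.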

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06N3/04; G06N3/08
CPC: G06N3/084; G06V40/20; G06V40/10; G06N3/045
Inventor: 刘智, 李博, 冯欣, 葛永新, 张凌, 张杰慧
Owner: CHONGQING UNIV OF TECH