
Human Action Recognition Method Based on Convolutional Neural Network Feature Coding

A convolutional neural network and human action recognition technology, applied in the field of image processing, that addresses problems such as low accuracy, heavy computation, and slow speed, thereby improving recognition accuracy, stabilizing vector features, and reducing the amount of calculation.

Active Publication Date: 2019-10-11
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

The shortcoming of this method is its heavy computation, which leads to slow speed and poor real-time performance, as well as the problem of trajectory drift.
The disadvantage of this method is its high time complexity; it is easily affected by occlusion and differences between human bodies, so its accuracy is low, and it is only suitable for recognizing simple actions.
The disadvantage of this method is that the process is complex and easily affected by occlusion and differences between human bodies.
Existing human action recognition methods have high time complexity, heavy computation, and poor real-time performance, and are easily affected by occlusion, changes in light intensity, and differences between human bodies.



Examples


Embodiment 1

[0039] At present, human action recognition has a wide range of application value in daily life, and it is also an active research topic. Existing human action recognition methods are mainly based on template matching, neural networks, or spatiotemporal features. These methods suffer from high time complexity, heavy computation, poor real-time performance, sensitivity to occlusion, large memory requirements, complex implementation, and low recognition rates, and they are easily affected by environmental conditions. When processing large amounts of data with complex backgrounds, their weak robustness reduces the accuracy of human action recognition. In view of this situation, the present invention proposes a human action recognition method based on convolutional neural network feature ...

Embodiment 2

[0059] The human action recognition method based on convolutional neural network feature coding is the same as in Embodiment 1. In step (5) of the present invention, local feature accumulation coding is applied separately to the convolutional features of the spatial-direction video and the convolutional features of the action-direction optical flow maps to obtain local feature accumulation descriptors, through the following steps:

[0060] (5a) For each image of the human action video in the spatial direction, collect the pixel values at the same position across the 512 convolutional feature maps of 6×6 pixels, obtaining 36 local feature accumulation descriptors of 512 dimensions each. The local feature accumulation descriptors of a video can then be expressed as n × (36 × 512), where n is the number of frames in the video;
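Step (5a) amounts to treating each of the 36 spatial positions of the 512-channel, 6×6 convolutional output as one 512-dimensional local descriptor. A minimal NumPy sketch of this rearrangement (the feature maps here are random stand-ins for real conv-layer outputs, which is an assumption of this illustration):

```python
import numpy as np

# Stand-in conv feature maps for one frame: 512 channels of 6x6 pixels
# (random values used purely for illustration).
feature_maps = np.random.rand(512, 6, 6).astype(np.float32)

# Gather the values at each of the 6*6 = 36 spatial positions across all
# 512 channels: each position yields one 512-dimensional local descriptor.
descriptors = feature_maps.reshape(512, 36).T   # shape (36, 512)

# For a video of n frames, the per-frame descriptors stack to n x 36 x 512,
# matching the n x (36 x 512) layout described in step (5a).
n = 10
video_maps = np.random.rand(n, 512, 6, 6).astype(np.float32)
video_descriptors = video_maps.reshape(n, 512, 36).transpose(0, 2, 1)
```
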

[0061] (5b) 512 6×6-pixel convolutional feature maps obtained for each optical flow map in the action d...

Embodiment 3

[0064] The human action recognition method based on convolutional neural network feature coding is the same as in Embodiments 1-2. In step (6) of the present invention, principal component analysis (PCA) is used to reduce the dimensionality of, and whiten, the local feature accumulation descriptors of the spatial direction and the action direction respectively, as follows:

[0065] (6a) Use principal component analysis (PCA) to perform dimensionality reduction and whitening on the local feature accumulation descriptors in the spatial direction;

[0066] (6a1) Randomly extract 10,000 local feature accumulation descriptors from the encoded descriptors, denoted {x_1, ..., x_i, ..., x_m}, as the input data for PCA processing, where i ∈ [1, m] and m is the number of local feature accumulation descriptors;

[0067] (6a2) Calculate the mean of the m local feature accumulation descriptors according to the following formula:

[0068] $\mu = \frac{1}{m}\sum_{i=1}^{m} x_i$

[006...
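The PCA dimensionality-reduction and whitening of step (6) can be sketched with NumPy as follows. The reduced dimension k and the stability constant eps are assumptions of this illustration (the excerpt does not state them), and random data stands in for the sampled descriptors:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 512))          # m x d matrix of descriptors (stand-in data)
k = 256                                     # assumed reduced dimension

mu = X.mean(axis=0)                         # step (6a2): mean descriptor
Xc = X - mu                                 # center the data
cov = Xc.T @ Xc / Xc.shape[0]               # covariance matrix (d x d)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigen-decomposition, ascending order

order = np.argsort(eigvals)[::-1][:k]       # keep the top-k principal components
W = eigvecs[:, order]
lam = eigvals[order]

eps = 1e-8                                  # numerical-stability constant (assumed)
X_white = (Xc @ W) / np.sqrt(lam + eps)     # reduced and whitened descriptors
```

After whitening, each retained component has approximately unit variance, which stabilizes the subsequent VLAD encoding.
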



Abstract

The invention proposes a human action recognition method based on convolutional neural network feature coding, which mainly solves the problems of complex calculation and low accuracy in the prior art. The implementation is: use TV-L1 to obtain the optical flow maps of the video; from the video spatial direction and the optical-flow action direction respectively, apply convolutional neural network feature extraction, local feature accumulation coding, dimensionality reduction and whitening, and VLAD vector processing, obtaining a spatial-direction VLAD vector and an action-direction VLAD vector; combine the information of the two directions to obtain human action classification data, and then perform classification. By applying local feature accumulation coding to the convolutional features, the invention improves the recognition rate and reduces the amount of calculation when processing complex-background data, and the features obtained by fusing the video and optical-flow VLAD vectors are more robust to environmental changes. The method can be used to detect and recognize human movements in surveillance videos of areas such as residential districts, shopping malls, and confidential places.
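The VLAD (Vector of Locally Aggregated Descriptors) step mentioned in the abstract aggregates, for each codebook center, the residuals of the local descriptors assigned to it. A minimal NumPy sketch, where the codebook size K, the descriptor dimension, and the random codebook (normally learned by k-means) are all assumptions of this illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
d, K = 64, 8                                 # descriptor dim and codebook size (assumed)
codebook = rng.normal(size=(K, d))           # stand-in for k-means cluster centers
descriptors = rng.normal(size=(200, d))      # local descriptors of one video (stand-in)

# Assign each descriptor to its nearest codeword.
dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
assign = dists.argmin(axis=1)

# Accumulate residuals (descriptor - codeword) per codeword.
vlad = np.zeros((K, d))
for j in range(K):
    members = descriptors[assign == j]
    if len(members):
        vlad[j] = (members - codebook[j]).sum(axis=0)

# Flatten to a single K*d vector and L2-normalize it.
vlad = vlad.ravel()
vlad /= np.linalg.norm(vlad) + 1e-12
```

One such vector is computed for the spatial direction and one for the action direction, and the two are fused before classification.
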

Description

technical field

[0001] The invention belongs to the technical field of image processing and further relates to human action recognition based on deep learning, specifically a method for human action recognition based on convolutional neural network feature coding, which can be used for human motion detection and recognition in surveillance videos of areas such as residential districts, hotels, shopping malls, and confidential places.

Background technique

[0002] With the rapid development of science and technology and the continuous improvement of living standards, people pay increasing attention to safety in daily life. Video surveillance equipment is now widespread, installed in many places such as residential quarters, hotels, parking lots, shopping malls, intersections, companies, and confidential sites. As the scale of video surveillance equipment continues to expand, the demand for more intel...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00, G06K9/62
CPC: G06V40/20, G06V20/42, G06F18/2411
Inventors: 韩红, 程素华, 何兰, 衣亚男, 李林糠
Owner: XIDIAN UNIV