
Track and convolutional neural network feature extraction-based behavior identification method

A behavior recognition method based on trajectory and convolutional neural network feature extraction, applied in the field of video-based behavior recognition. It addresses the problems of heavy computation and insufficient feature expression ability in existing methods, and achieves strong robustness and discriminative power, improved algorithm efficiency, and reduced computational complexity and feature dimensionality.

Active Publication Date: 2017-05-31
XIDIAN UNIV


Problems solved by technology

[0009] The purpose of the present invention is to propose, against the problems of heavy computation and insufficient feature expression ability in the prior art, a behavior recognition method based on trajectory and convolutional neural network feature extraction that has strong feature expression ability, reduces redundant computation, and can extract abstract convolutional trajectory features.



Examples


Embodiment 1

[0038] For the problem of human behavior recognition, traditional methods generally extract the trajectory points generated by human motion and compute unsupervised feature descriptors in the spatio-temporal neighborhood of those points, such as the histogram of oriented gradients (HOG), histogram of optical flow (HOF), and motion boundary histogram (MBH), then combine them with the Fisher transform and principal component analysis for final classification and recognition. However, unsupervised feature descriptors generally suffer from insufficient feature representation ability and high computational complexity.
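For concreteness, the following is a minimal sketch of one such unsupervised descriptor: a histogram of optical flow (HOF) computed in a patch around a single trajectory point. The Farneback flow, patch size, and bin count are illustrative assumptions, not parameters taken from the patent.

```python
import cv2
import numpy as np

def hof_descriptor(prev_gray, curr_gray, point, patch=32, bins=8):
    """Histogram of optical flow in a patch around one trajectory point.

    The patch size and bin count are illustrative choices, not the
    patent's parameters.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    x, y = int(point[0]), int(point[1])
    h = patch // 2
    win = flow[max(y - h, 0):y + h, max(x - h, 0):x + h]      # local flow patch
    mag, ang = cv2.cartToPolar(win[..., 0], win[..., 1])      # magnitude and angle
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi),
                           weights=mag)                       # magnitude-weighted orientation histogram
    return hist / (np.linalg.norm(hist) + 1e-8)               # L2-normalize
```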

[0039] To avoid the problems existing in the prior art, improve the effectiveness and accuracy of human behavior recognition, and reduce redundant computation, the present invention proposes a behavior recognition method based on trajectory and convolutional neural network feature extraction (see Figure 1), comprising the following steps:

[0040] (...

Embodiment 2

[0057] The behavior recognition method based on trajectory and convolutional neural network feature extraction is the same as in Embodiment 1, with the following elaboration.

[0058] In step (2.4), extracting the trajectory-constrained convolutional-layer features using the convolutional neural network specifically includes the following steps:

[0059] (2.4.1) Train the convolutional neural network: extract video frames and the corresponding class labels from the human behavior videos as the input of the convolutional neural network (CNN), and extract convolutional features for each input video frame. The structure of the CNN is 5 convolutional layers followed by 3 fully connected layers.

[0060] Different layers of the convolutional neural network capture different behavior patterns, from low-level edges and textures to complex objects and targets; higher layers have larger receptive fields and thus yield more discriminative features.
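As a minimal sketch of step (2.4.1), the PyTorch model below has the 5 convolutional and 3 fully connected layers the patent specifies; the AlexNet-like channel widths, kernel sizes, and the 101-class output are assumptions, since the patent fixes only the layer counts. The forward hook captures the last convolutional layer's feature maps for each input frame, in line with [0060]'s observation that deeper layers are more discriminative.

```python
import torch
import torch.nn as nn

class BehaviorCNN(nn.Module):
    """5-conv / 3-fc network; exact widths and kernels are assumptions."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(), nn.MaxPool2d(3, 2),    # conv1
            nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),  # conv2
            nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),                     # conv3
            nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(),                     # conv4
            nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2), # conv5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),  # fc6
            nn.Linear(4096, 4096), nn.ReLU(),         # fc7
            nn.Linear(4096, num_classes),             # fc8
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = BehaviorCNN(num_classes=101)   # 101 classes is an assumption
conv_maps = {}
# Hook the ReLU after conv5: per-frame feature maps for trajectory pooling.
model.features[-2].register_forward_hook(
    lambda m, i, o: conv_maps.update(conv5=o.detach()))

frame = torch.randn(1, 3, 227, 227)    # one preprocessed video frame
logits = model(frame)                  # training on frames + class labels omitted
print(conv_maps['conv5'].shape)        # torch.Size([1, 256, 13, 13])
```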

[0061] (2.4.2) Obt...

Embodiment 3

[0070] The behavior recognition method based on trajectory and convolutional neural network feature extraction is the same as in Embodiment 1, with the following elaboration.

[0071] The maximum-margin feature transformation method described in step (3) is specifically as follows:

[0072] Local Fisher vectors are sampled from all labeled training samples to form a sampling subset {φ_i, y_i}, i = 1, …, N, on which a projection matrix U ∈ R^(p×2Kd), with p ≪ 2Kd, is learned by the maximum-margin feature transformation method, where N denotes the number of local Fisher vectors in the sampling subset.

[0073] Using a one-vs-rest strategy, the multi-class problem over the B behavior classes is decomposed into multiple binary classification problems for learning the projection matrix, and a maximum-margin problem is solved for each binary classification. The maximum-margin constraint is as follows:

[0074] y′_i (wUφ_i + b) > 1, i = 1, …, N

[0075] where y′_i ∈ {-1, +1} is the class label of the...
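One simplified way to satisfy constraints of this form is to train a one-vs-rest linear SVM per class on the sampled local Fisher vectors and stack the resulting weight vectors as the rows of U (so p equals the number of binary problems). This is only an illustrative reading; the patent may learn U jointly with w, and scikit-learn's LinearSVC stands in here for the margin solver.

```python
import numpy as np
from sklearn.svm import LinearSVC

def learn_projection(phi, y):
    """Learn U (p x 2Kd) from sampled local Fisher vectors.

    One binary max-margin problem per class (one-vs-rest); stacking the
    per-class weight vectors as rows of U is an illustrative
    simplification, not necessarily the patent's joint optimization.
    """
    rows = []
    for c in np.unique(y):
        y_bin = np.where(y == c, 1, -1)          # y'_i in {-1, +1}
        svm = LinearSVC(C=1.0).fit(phi, y_bin)   # max-margin binary problem
        rows.append(svm.coef_.ravel())
    return np.vstack(rows)                       # U in R^(p x 2Kd), p = #classes

# Toy usage: N=200 local Fisher vectors of dimension 2Kd=64, B=5 classes.
rng = np.random.default_rng(0)
phi = rng.normal(size=(200, 64))
y = rng.integers(0, 5, size=200)
U = learn_projection(phi, y)
compressed = phi @ U.T                           # p-dimensional transformed features
print(U.shape, compressed.shape)                 # (5, 64) (200, 5)
```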



Abstract

The invention discloses a behavior recognition method based on trajectory and convolutional neural network feature extraction, and mainly solves the problems of redundant computation and low classification accuracy caused by complex human behavior video content and sparse features. The method comprises the steps of: inputting video data; down-sampling pixel points in each video frame; deleting sampling points in uniform regions; extracting trajectories; extracting convolutional-layer features with a convolutional neural network; combining the trajectories and the convolutional-layer features to obtain trajectory-constrained convolutional features; extracting stacked local Fisher vector features from the trajectory-constrained convolutional features; compressing the stacked local Fisher vector features by a feature transformation; training a support vector machine model on the final stacked local Fisher vector features; and performing human behavior recognition and classification. By combining multi-level Fisher vectors with convolutional trajectory feature descriptors, the method obtains relatively high and stable classification accuracy, and can be widely applied in fields such as human-computer interaction, virtual reality, and video surveillance.
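The first abstract steps (dense down-sampling of pixel points, deletion of uniform-region sampling points, trajectory extraction) are in the spirit of dense-trajectory methods. The sketch below assumes a fixed grid stride, a minimum-eigenvalue texture test for discarding uniform regions, and per-frame tracking along dense optical flow; the stride, threshold, and input file name are illustrative, not the patent's values.

```python
import cv2
import numpy as np

STRIDE, QUALITY = 5, 0.001   # grid step and texture threshold: illustrative values

def sample_points(gray):
    """Densely sample grid points, then delete those in uniform regions
    (small minimum eigenvalue of the local gradient matrix)."""
    eig = cv2.cornerMinEigenVal(gray, blockSize=3)
    thresh = QUALITY * eig.max()
    ys, xs = np.mgrid[0:gray.shape[0]:STRIDE, 0:gray.shape[1]:STRIDE]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    keep = eig[pts[:, 1].astype(int), pts[:, 0].astype(int)] > thresh
    return pts[keep]

def track(prev_gray, curr_gray, pts):
    """Extend trajectories one frame by following the dense optical flow."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    pts = np.clip(pts, 0, [w - 1, h - 1])        # keep indices inside the frame
    disp = flow[pts[:, 1].astype(int), pts[:, 0].astype(int)]
    return pts + disp

cap = cv2.VideoCapture('behavior_clip.avi')      # hypothetical input video
ok, frame = cap.read()
assert ok, 'could not read video'
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = sample_points(prev)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = track(prev, curr, pts)                 # trajectory points per frame
    prev = curr
```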

Description

Technical field

[0001] The invention belongs to the technical field of video image processing, and mainly relates to deep learning and feature extraction, in particular to a behavior recognition method based on trajectory and convolutional neural network feature extraction, used to classify human action videos.

Background technique

[0002] Human action recognition is widely used in the fields of human-computer intelligent interaction, virtual reality, and video surveillance. Although research on human motion behavior recognition at home and abroad has made important progress in recent years, the high complexity and variability of human motion mean that recognition accuracy and efficiency do not yet fully meet the requirements of related industries. In general, the challenges in human action recognition come from the following two aspects:

[0003] 1) Spatial complexity: different action scenes are presented under different lighting, viewing angles, and backgrounds...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06K9/00; G06N3/04; G06N3/08
CPC: G06N3/04; G06N3/08; G06V40/20; G06F18/2411
Inventors: 张向荣, 焦李成, 惠通, 李阳阳, 冯婕, 白静, 侯彪, 马文萍
Owner: XIDIAN UNIV