Neural network-based movement recognition method

A neural network and human action recognition technology, applied in neural learning methods, biological neural network models, and character and pattern recognition. It addresses the limited application of convolutional neural networks to video recognition, and achieves the effect of avoiding the negative influence of noisy features and of a fixed sample length on the recognition result.

Publication date: 2017-07-25 (inactive)
Owner: TIANJIN UNIV

AI Technical Summary

Problems solved by technology

However, in the field of video recognition, the application of convolutional neural networks remains relatively limited.



Examples


Embodiment 1

[0028] An embodiment of the present invention proposes a neural network-based action recognition method; see Figure 1. The action recognition method includes the following steps:

[0029] 101: Train N mutually independent 3D convolutional neural networks based on the video database, and use them as video feature extractors;

[0030] 102: According to the video feature extractor, train a multi-instance learning classifier;

[0031] 103: Input the video to be recognized, extract its features through the trained networks, and classify the action with the classifier.
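By way of illustration, steps 101-103 can be sketched end to end in Python. This is a minimal sketch, assuming 4096-dimensional C3D-style clip features, max-pooling as the multi-instance aggregation, and a scikit-learn classifier; the function names, feature shapes, and random placeholder features are illustrative assumptions, not the patent's concrete implementation.

```python
# Minimal end-to-end sketch of steps 101-103 (all names and shapes assumed).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def extract_clip_features(video, n_networks=3, feat_dim=4096):
    # Step 101 stand-in: each of the N trained 3D ConvNets would map every
    # clip of the video to an fc6/fc7 feature vector; random vectors act
    # as placeholders here.
    n_clips = max(1, len(video) // 16)
    return rng.normal(size=(n_networks * n_clips, feat_dim))

def bag_representation(clip_features):
    # A common multi-instance choice: max-pool the instance (clip)
    # features into a single bag (video) vector.
    return clip_features.max(axis=0)

# Step 102: train a classifier on bag representations of labelled videos.
train_videos = [rng.normal(size=(n, 1)) for n in (64, 80, 48, 96)]
train_labels = [0, 0, 1, 1]
X = np.stack([bag_representation(extract_clip_features(v)) for v in train_videos])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

# Step 103: extract features of an unseen video and classify the action.
test_video = rng.normal(size=(72, 1))
pred = clf.predict(bag_representation(extract_clip_features(test_video))[None, :])
print("predicted action class:", pred[0])
```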

[0032] Specifically, training the N mutually independent 3D convolutional neural networks on the video database in step 101 and using them as video feature extractors proceeds as follows:

[0033] Divide each video in the video library into several video clips with a frame length of F_i, and use each clip as a training sample for network i to train a 3D convolutional neural network. N independ...
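As a sketch of this splitting step, assuming non-overlapping windows and dropping an incomplete tail clip (the excerpt does not fix the values of F_i or the handling of trailing frames, so both are hypothetical):

```python
# Clip-splitting sketch; F_i values below are hypothetical.
def split_into_clips(frames, clip_len):
    """Divide one video (a sequence of frames) into clips of clip_len
    frames; each clip becomes one training sample for network i."""
    n_clips = len(frames) // clip_len          # drop an incomplete tail clip
    return [frames[k * clip_len:(k + 1) * clip_len] for k in range(n_clips)]

frame_lengths = [8, 16, 32]    # hypothetical F_1, F_2, F_3 for N = 3 networks
video = list(range(70))        # 70 dummy frames
for i, f_i in enumerate(frame_lengths, start=1):
    clips = split_into_clips(video, f_i)
    print(f"network {i}: {len(clips)} clips of {f_i} frames")
```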

Embodiment 2

[0042] The scheme of Embodiment 1 is further described below with reference to specific examples and Figures 2-4; see the following description for details:

[0043] 201: Establish a video database, train N mutually independent 3D convolutional neural networks on it, and use them as video feature extractors, i.e. extractors of C3D features;

[0044] The C3D features are learned with 3D ConvNets (3D convolutional neural networks), whose network structure is shown in Figure 2. All convolution filters are 3*3*3 with a spatio-temporal step size of 1. Except for Pool1 (1*2*2), all pooling layers have a size of 2*2*2 and a step size of 1. Finally, 4096-dimensional outputs are obtained from the fully connected layers fc6 and fc7, respectively.
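For illustration, the described structure can be written down as a PyTorch sketch. The 3*3*3 convolutions with stride 1, the 1*2*2 Pool1, the 2*2*2 pooling elsewhere, and the 4096-dimensional fc6/fc7 follow the text; the channel widths, the pooling strides of 2, the pool5 padding, and the 16-frame 112*112 input follow the publicly known C3D architecture and are assumptions where this excerpt is silent or ambiguous.

```python
# C3D-style extractor sketch (channel widths and input size assumed).
import torch
import torch.nn as nn

class C3DExtractor(nn.Module):
    def __init__(self, num_classes: int = 101):   # class count is hypothetical
        super().__init__()
        def conv(cin, cout):
            # every convolution filter is 3*3*3 with spatio-temporal stride 1
            return nn.Sequential(nn.Conv3d(cin, cout, 3, stride=1, padding=1),
                                 nn.ReLU(inplace=True))
        self.features = nn.Sequential(
            conv(3, 64),    nn.MaxPool3d((1, 2, 2), (1, 2, 2)),  # Pool1: 1*2*2
            conv(64, 128),  nn.MaxPool3d(2, 2),
            conv(128, 256), conv(256, 256), nn.MaxPool3d(2, 2),
            conv(256, 512), conv(512, 512), nn.MaxPool3d(2, 2),
            conv(512, 512), conv(512, 512),
            nn.MaxPool3d(2, 2, padding=(0, 1, 1)),
        )
        self.fc6 = nn.Linear(512 * 1 * 4 * 4, 4096)  # 4096-dimensional output
        self.fc7 = nn.Linear(4096, 4096)             # 4096-dimensional output
        self.fc8 = nn.Linear(4096, num_classes)

    def forward(self, x):
        # x: (batch, 3, frames, height, width), e.g. (1, 3, 16, 112, 112)
        x = self.features(x).flatten(1)
        feat = torch.relu(self.fc7(torch.relu(self.fc6(x))))
        return self.fc8(feat), feat   # class logits and the fc7 feature

logits, feat = C3DExtractor()(torch.randn(1, 3, 16, 112, 112))
print(feat.shape)  # torch.Size([1, 4096])
```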

[0045] The video feature extractor requires training N mutually independent 3D ConvNets, and the training process of each network is the same; see Figure 3, takin...
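The per-network training procedure is cut off in this excerpt; the following is a minimal sketch, assuming standard cross-entropy training with SGD over clip/label batches and a model like the C3DExtractor sketch above that returns (logits, features). The optimizer, learning rate, and batch layout are illustrative assumptions.

```python
# Per-network training-loop sketch (optimizer and hyperparameters assumed).
import torch
import torch.nn as nn

def train_one_network(net: nn.Module, loader, epochs: int = 10):
    opt = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for clips, labels in loader:   # clips: (batch, 3, F_i, H, W)
            logits, _ = net(clips)     # features are not needed during training
            loss = loss_fn(logits, labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return net
```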



Abstract

The invention discloses a neural network-based human body movement recognition method. The method includes the following steps: training N mutually independent 3D convolutional neural networks on a video database as video feature extractors; training a multi-instance learning classifier on the extracted features; and inputting a to-be-recognized video, extracting its features through the trained networks, and classifying the movement with the classifier. According to the technical scheme of the invention, the influence of a large number of noise features on the classification result is avoided, as is the negative effect of a fixed sample length on the movement recognition result.

Description

Technical field

[0001] The invention relates to the field of human action recognition, in particular to an action recognition method based on a neural network.

Background technique

[0002] With the development of the mobile Internet, the carrier of information has gradually expanded from text to audio, image, video and other forms. In recent years, the amount of video data has grown explosively, and its application fields have become more diverse, involving security, surveillance, entertainment and other areas [1]. Faced with such a massive amount of data, traditional manual processing can no longer meet people's needs. Therefore, using the computer's powerful storage and computing capabilities to realize the recognition and understanding of video information has important academic value and broad application prospects.

[0003] In fact, in the field of computer vision, research on video has been carried out for decades, and research topics include action re...


Application Information

IPC(8): G06K9/00; G06K9/62; G06N3/08
CPC: G06N3/08; G06V20/42; G06F18/24
Inventor: 苏育挺, 安阳, 聂为之
Owner: TIANJIN UNIV