Human behavior identification method based on 3D deep convolutional network

A human behavior identification method based on a 3D deep convolutional network, applied in character and pattern recognition, biological neural network models, instruments, etc. It addresses problems such as the loss of behavioral information and the inability to process videos of arbitrary spatial scale and duration, with the effects of improving robustness, increasing the scale of video training data, and improving the completeness of behavior information.

Active Publication Date: 2017-12-22
CHENGDU KOALA URAN TECH CO LTD


Problems solved by technology

[0003] To sum up, the problems in the existing technology are: the existing 3D convolutional network can only extract sub-motion states, and it assumes that every small segment of a video belongs to the same behavior category; in addition, the existing behavior recognition network …




Embodiment Construction

[0036] In order to make the object, technical solution and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the examples. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.

[0037] For action recognition in video, traditional methods treat the problem as multi-class classification and propose different video feature extraction methods. However, traditional methods extract low-level information, such as visual texture or motion estimates from the video. Because the extracted information is limited, it cannot represent the video content well, and the resulting classifier is suboptimal. As a deep learning technique, the convolutional neural network integrates feature learning and classifier learning into a single whole, and has been successfully applied to behavior recognition in…
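As an illustration of the spatio-temporal features such a network computes, the following is a minimal NumPy sketch of a single-channel 3D convolution over a video clip. This is not the patented network itself; the clip size, kernel size, and function name are illustrative assumptions.

```python
import numpy as np

def conv3d_single(clip, kernel):
    """Valid 3D convolution of one video clip with one filter.

    clip:   array of shape (T, H, W) -- frames x height x width
    kernel: array of shape (t, h, w) -- spatio-temporal filter
    Returns an array of shape (T-t+1, H-h+1, W-w+1): each output
    value responds to a small cube of space *and* time, which is
    what lets a 3D network capture motion as well as appearance.
    """
    T, H, W = clip.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(clip[i:i+t, j:j+h, k:k+w] * kernel)
    return out

clip = np.random.rand(16, 32, 32)   # a hypothetical 16-frame grayscale clip
kernel = np.random.rand(3, 3, 3)    # a 3x3x3 spatio-temporal filter
features = conv3d_single(clip, kernel)
print(features.shape)               # (14, 30, 30)
```

In a real network such as C3D, many such filters are learned per layer and stacked with pooling; the loop above only makes the sliding-cube computation explicit.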



Abstract

The invention belongs to the field of computer vision video motion identification, and discloses a human behavior identification method based on a 3D deep convolutional network. The method comprises the steps of: firstly, dividing a video into a series of consecutive video segments; then inputting the consecutive segments into a 3D neural network formed by convolutional computation layers and a space-time pyramid pooling layer to obtain features of the segments; and then calculating global video features by means of a long short-term memory (LSTM) model, and regarding the global video features as a behavior pattern. The method has clear advantages: by improving the standard 3D convolutional network C3D and introducing multi-level pooling, it can perform feature extraction on video segments of arbitrary resolution and duration, which improves the robustness of the model to behavior change; it is conducive to increasing the scale of video training data while maintaining video quality; and it improves the completeness of behavior information by embedding correlation information across motion sub-states.
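The pyramid pooling idea above can be sketched as follows: split the feature map into coarser and finer grids of bins and pool each bin, so that clips of different resolutions and durations all map to a feature vector of fixed length. This is a simplified, hypothetical illustration; the bin levels and function name are assumptions, not the patent's exact design.

```python
import numpy as np

def st_pyramid_pool(fmap, levels=(1, 2)):
    """Spatio-temporal pyramid max-pooling.

    fmap: feature map of shape (T, H, W) with arbitrary T, H, W.
    For each level n, the map is split into an n x n x n grid of
    bins and each bin is max-pooled, so the output length depends
    only on `levels`, not on the input size: sum(n**3 for n in levels).
    """
    feats = []
    T, H, W = fmap.shape
    for n in levels:
        t_bins = np.array_split(np.arange(T), n)
        h_bins = np.array_split(np.arange(H), n)
        w_bins = np.array_split(np.arange(W), n)
        for ti in t_bins:
            for hi in h_bins:
                for wi in w_bins:
                    feats.append(fmap[np.ix_(ti, hi, wi)].max())
    return np.array(feats)

# Two clips with different durations and resolutions yield
# feature vectors of the same fixed length (1**3 + 2**3 = 9 bins).
short = st_pyramid_pool(np.random.rand(8, 14, 14))
long_ = st_pyramid_pool(np.random.rand(30, 28, 20))
print(short.shape, long_.shape)   # (9,) (9,)
```

This fixed-length property is what allows a downstream sequence model to consume segments of arbitrary size without resizing or cropping the video.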

Description

Technical field

[0001] The invention belongs to the field of computer vision video recognition, and in particular to a human behavior recognition method based on a 3D deep convolutional network.

Background technique

[0002] In the field of computer vision, research on action recognition has been going on for more than 10 years. As an important part of pattern recognition, feature engineering has always been dominant in the field of behavior recognition. Before deep learning, the scientists Ivan Laptev and Cordelia Schmid of the French computer vision institution Inria made the most outstanding contributions to the learning of behavioral features. Similar to the ILSVRC image recognition challenge, the action-recognition challenge THUMOS refreshes its recognition records every year, and the behavioral feature calculation methods introduced by Inria have consistently been among the best. In particular, in 2013 Dr. Wang Heng of Inria proposed a trajectory-based behavior feature calc…


Application Information

IPC(8): G06K9/00; G06K9/62; G06N3/04
CPC: G06V40/20; G06N3/045; G06F18/24; G06F18/214
Inventors: 高联丽 (Gao Lianli), 宋井宽 (Song Jingkuan), 王轩瀚 (Wang Xuanhan), 邵杰 (Shao Jie), 申洪宇 (Shen Hongyu)
Owner CHENGDU KOALA URAN TECH CO LTD