Configurable convolutional neural network based red-green-blue-depth (RGB-D) human behavior recognition method

An RGB-D human behavior recognition method based on a configurable convolutional neural network. The method addresses the problems of low recognition accuracy, the difficulty of expressing the temporal variation of sub-actions, and the general difficulty of recognizing human behavior, and achieves the effect of high accuracy.

Active Publication Date: 2014-12-17
SYSU CMU SHUNDE INT JOINT RES INST +1


Problems solved by technology

Because individual people differ in pose and viewpoint, it is usually difficult to accurately extract their motion information as features.
At the same time, the depth camera itself produces severe sensor noise, which makes hand-crafting features very difficult.
[0007] (2) Human behavior varies greatly in the time domain.
Because hand-crafted features struggle to express the motion information in RGB-D video data, and fixed-length temporal blocks struggle to express the temporal variation of sub-actions, the recognition accuracy is low.

Method used




Embodiment Construction

[0030] The present invention will be further described below in conjunction with the accompanying drawings, but the embodiments of the present invention are not limited thereto.

[0031] 1. Structured deep model

[0032] First, the structured deep model and the hidden variables it introduces are described in detail.

[0033] 1.1 Deep Convolutional Neural Network

[0034] To model complex human behaviors, the deep model in this embodiment is shown in Figure 3. It consists of M sub-networks and two fully connected layers: the outputs of the M sub-networks are concatenated into one long vector, which is then fed to the two fully connected layers. (In Figure 3, M is 3, and each sub-network is drawn with a different pattern.) Each sub-network processes its corresponding video segment, which corresponds to a sub-behavior decomposed from the complex behavior. Each sub-network consists, in order, of a 3D convolutional layer, a downsampling layer,...
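The structure just described (M sub-networks, each a 3D convolution followed by downsampling, concatenated and fed to two fully connected layers) can be sketched by tracing feature sizes through the pipeline. All layer sizes below (filter counts, kernel and segment shapes) are illustrative assumptions, not values taken from the patent.

```python
# Minimal sketch of the structured deep model's shape flow, assuming
# hypothetical layer sizes: M sub-networks, each applying a valid 3D
# convolution and a non-overlapping downsampling step; the flattened
# sub-network outputs are concatenated into one long vector.

def conv3d_out(shape, kernel, stride=1):
    """Output (H, W, T) size of a valid 3D convolution."""
    return tuple((s - k) // stride + 1 for s, k in zip(shape, kernel))

def downsample_out(shape, factor=2):
    """Output size after non-overlapping pooling by `factor`."""
    return tuple(s // factor for s in shape)

def subnetwork_out_dim(segment_shape, n_filters=16, kernel=(5, 5, 3), pool=2):
    """Flattened feature length one sub-network produces for a
    video segment of shape (height, width, frames)."""
    shape = conv3d_out(segment_shape, kernel)
    shape = downsample_out(shape, pool)
    h, w, t = shape
    return n_filters * h * w * t

M = 3                    # number of sub-behaviors / sub-networks (as in Figure 3)
segment = (32, 32, 9)    # assumed per-segment (H, W, frames)
concat_dim = sum(subnetwork_out_dim(segment) for _ in range(M))
fc1_dim, n_classes = 256, 10   # the two fully connected layers (sizes assumed)

print(concat_dim)  # length of the concatenated long vector: 28224 here
```

With these assumed sizes, each sub-network emits a 9408-dimensional vector, so the concatenated input to the first fully connected layer has length 3 × 9408 = 28224.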



Abstract

The invention discloses a configurable convolutional neural network based red-green-blue-depth (RGB-D) human behavior recognition method. The method constructs a configurable deep convolutional neural network whose structure can be adjusted dynamically. The recognition method can directly process RGB-D video data and can dynamically adjust the network structure according to how the behavior changes in the time domain, thereby effectively and automatically extracting spatio-temporal features of complex human behaviors and ultimately greatly improving the accuracy of behavior recognition.
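The "dynamic adjustment of the network structure according to the temporal change of the behavior" can be illustrated with a toy latent-variable search: the boundaries of the M video segments are not fixed, but chosen to maximize some per-segment score. The scoring function below is a stand-in for demonstration only, not the patent's actual model.

```python
# Hypothetical sketch: treat the temporal segmentation into M contiguous
# segments as a latent variable, and pick the segmentation that maximizes
# a per-segment compatibility score (here a dummy score, purely illustrative).
from itertools import combinations

def segmentations(n_frames, m):
    """Yield every way to cut frames 0..n_frames-1 into m contiguous,
    non-empty segments."""
    for cuts in combinations(range(1, n_frames), m - 1):
        bounds = (0,) + cuts + (n_frames,)
        yield [list(range(bounds[i], bounds[i + 1])) for i in range(m)]

def best_segmentation(n_frames, m, score):
    """Return the segmentation with the highest total per-segment score."""
    return max(segmentations(n_frames, m),
               key=lambda segs: sum(score(s) for s in segs))

# Dummy score that favours segments of length 3, for demonstration.
balanced = lambda seg: -abs(len(seg) - 3)

segs = best_segmentation(9, 3, balanced)
print([len(s) for s in segs])  # -> [3, 3, 3]
```

In the patent's setting, the score would come from the network itself, so that segment boundaries adapt to where one sub-action ends and the next begins.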

Description

Technical field

[0001] The present invention relates to the field of human behavior recognition, and more specifically, to an RGB-D human behavior recognition method based on a configurable convolutional neural network.

Background technique

[0002] Human behavior recognition is an important area of computer vision research. Its applications include intelligent surveillance, patient monitoring, and systems involving human-computer interaction. The goal of human behavior recognition is to automatically analyze an unknown video (for example, a sequence of image frames) and identify the human activity taking place in it. Simply put, if a video is segmented so that it contains only a single human action, the goal of the system is to correctly classify the video into the action category to which it belongs. More generally, human behavior recognition aims to continuously identify the ongoing activities in a video and automatically mark the...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N 3/02, G06K 9/62
Inventor: 林倞, 王可泽, 李亚龙, 王小龙
Owner: SYSU CMU SHUNDE INT JOINT RES INST