Human body behavior recognition method based on RGB video and skeleton sequence

A recognition method in the field of computer vision and pattern recognition that addresses problems such as overfitting, the high dimensionality of fused global descriptors, and the resulting degradation of human behavior recognition performance, achieving good recognition accuracy while reducing the number of parameters.

Active Publication Date: 2020-11-20
NORTHWESTERN POLYTECHNICAL UNIV

AI Technical Summary

Problems solved by technology

Feature-level fusion methods suffer from the following problems: 1. there is a gap between the different feature spaces being fused; 2. the dimensionality of the fused global descriptor is very high, which requires more parameters for classification and easily causes overfitting. These problems seriously degrade the performance of human behavior recognition, as the parameter-count sketch below illustrates.
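As a rough, hypothetical illustration of the second problem (the feature dimension, number of key regions, and class count below are assumed for the example and are not taken from the invention), compare the classifier size required when all key-region features are concatenated into one global descriptor with the size of a single per-region classifier whose scores are fused afterwards:

```python
# Hypothetical dimensions for illustration only; they are not specified by the invention.
region_dim  = 768    # assumed per-region spatio-temporal feature size
num_regions = 15     # assumed number of skeleton-joint key regions
num_classes = 60     # assumed label set size (e.g. the scale of NTU RGB+D)

# Feature-level fusion: one linear classifier over the concatenated global descriptor.
global_dim = region_dim * num_regions                  # 11,520-dimensional descriptor
feature_fusion_params = global_dim * num_classes       # 691,200 classifier weights

# Decision-level fusion: one per-region classifier shared across regions; its size
# does not grow with the number of regions, and only the scores are fused afterwards.
decision_fusion_params = region_dim * num_classes      # 46,080 classifier weights

print(feature_fusion_params, decision_fusion_params)   # 691200 46080
```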



Detailed Description of the Embodiments

[0056] The present invention will now be further described in conjunction with the embodiments and the accompanying drawings:

[0057] The present invention proposes a two-stream network structure, referred to as LD-Net, which comprises two modules: a local decision block and a decision fusion block. One stream of LD-Net is the feature stream, which uses the multi-fiber network (Multi-Fiber Network, MF-Net) to extract the spatio-temporal features of the video clip. Because MF-Net has a multi-fiber structure, it effectively reduces the number of parameters of the three-dimensional network and avoids overfitting; the MF-Net framework is shown in Figure 3. The other stream is the attention stream, which uses the positions of the human skeleton points as attention regions, because the skeleton point information reflects the posture characteristics of the human body while largely eliminating information that is irrelevant to the target. For the extracted key-region features,...
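The following is a minimal PyTorch-style sketch of the two-stream idea described above. It is not the patented implementation: the small 3D convolution stands in for MF-Net, and the heat-map weighting of the feature map, the shared per-region classifier, and score averaging in the decision fusion block are assumed design choices used only for illustration.

```python
import torch
import torch.nn as nn

class FeatureStream(nn.Module):
    """Placeholder spatio-temporal backbone; MF-Net would be used here."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv = nn.Conv3d(3, channels, kernel_size=3, stride=2, padding=1)

    def forward(self, clip):                    # clip: (B, 3, T, H, W)
        return self.conv(clip)                  # (B, C, T', H', W')

class LDNetSketch(nn.Module):
    def __init__(self, channels=64, num_classes=60):
        super().__init__()
        self.features = FeatureStream(channels)
        self.pool = nn.AdaptiveAvgPool3d(1)
        # Local decision block: one classifier shared by all key regions (assumption).
        self.local_decision = nn.Linear(channels, num_classes)

    def forward(self, clip, joint_heatmaps):
        # joint_heatmaps: (B, J, T', H', W'), one attention map per skeleton joint,
        # assumed to be produced by the attention stream at feature-map resolution.
        feat = self.features(clip)                            # (B, C, T', H', W')
        local_logits = []
        for j in range(joint_heatmaps.size(1)):
            weighted = feat * joint_heatmaps[:, j:j + 1]      # attend to one key region
            region_feat = self.pool(weighted).flatten(1)      # (B, C)
            local_logits.append(self.local_decision(region_feat))
        # Decision fusion block: here simply the average of the per-region scores.
        return torch.stack(local_logits, dim=1).mean(dim=1)   # (B, num_classes)
```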



Abstract

The invention relates to a human body behavior recognition method based on an RGB video and a skeleton sequence, and belongs to the technical field of computer vision and pattern recognition. The method comprises the following steps: step 1, extracting features from an input video clip through the feature stream to obtain a spatio-temporal feature map; step 2, generating a skeleton-region heat map with the attention stream; step 3, extracting the spatio-temporal features of the skeleton regions by applying the heat map to the feature map; step 4, generating local decision results with the local decision block; and step 5, fusing the local decision results with the decision fusion block to obtain a global decision result. The invention uses two plug-and-play modules, the local decision block and the decision fusion block, to realize decision fusion: the local decision block makes a separate decision for the spatio-temporal features of each key region, and the decision fusion block fuses all local decisions into the final decision result. The method effectively improves the accuracy of behavior recognition on the Penn Action and NTU RGB+D data sets.
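As an illustration of step 2, a common way to turn per-frame skeleton joint coordinates into region heat maps is to place a Gaussian at each joint position. The kernel width, output resolution, and layout below are assumptions for the sketch rather than the invention's specified procedure.

```python
import numpy as np

def joint_heatmaps(joints_xy, height, width, sigma=6.0):
    """joints_xy: (T, J, 2) array of per-frame joint pixel coordinates (x, y).
    Returns heat maps of shape (J, T, height, width) with values in (0, 1]."""
    T, J, _ = joints_xy.shape
    ys, xs = np.mgrid[0:height, 0:width]
    maps = np.zeros((J, T, height, width), dtype=np.float32)
    for t in range(T):
        for j in range(J):
            x, y = joints_xy[t, j]
            # Gaussian bump centred on the joint; peaks at 1 at the joint location.
            maps[j, t] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
    return maps
```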

Description

Technical field
[0001] The invention relates to the technical field of computer vision and pattern recognition, in particular to a human behavior recognition method based on RGB video and skeleton sequences.
Background technique
[0002] Human action recognition, as a basic problem in computer vision, has attracted widespread attention in industry. With the continuous development of computer intelligence technology, human action recognition has broad application prospects in future life, for example in intelligent monitoring, human-computer interaction and somatosensory games, and video retrieval. Human action recognition in videos suffers from problems similar to those of object recognition in static images: both tasks must deal with significant intra-class variation, background clutter, and occlusions. However, videos carry an additional temporal cue that images lack, and capturing this temporal information is a major difficulty.
[0003] There are two main methods for Convolutional Neural Net...


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06K9/00G06K9/46G06K9/62G06N3/04G06N3/08
CPCG06N3/08G06V40/20G06V10/56G06N3/045G06F18/253
Inventor 曹聪琦李嘉康李亚娟张艳宁郗润平
Owner NORTHWESTERN POLYTECHNICAL UNIV