
Skeleton action recognition method based on graph convolution

A skeleton-based action recognition technology, applied in the field of action recognition, which addresses the problem that local/global information and the correspondence between specific actions and human body parts are not well considered by existing methods, thereby improving recognition accuracy.

Inactive Publication Date: 2021-05-04
NANJING UNIV OF SCI & TECH
Cites: 0 | Cited by: 4

AI Technical Summary

Problems solved by technology

However, existing skeleton-based action recognition methods typically represent human actions using only joint positions or sequence information, which fails to adequately capture local/global information and the correspondence between specific actions and human body parts.

Method used


Image

  • Skeleton action recognition method based on graph convolution

Examples


Embodiment

[0067] To verify the effectiveness of the scheme of the present invention, a simulation experiment was carried out on the public skeleton recognition dataset NTU RGB+D using the PyTorch deep learning platform. In the experiment, the method of the present invention follows the two standard evaluation protocols, cross-view and cross-subject, to determine the training and test data, and then the deep graph convolutional network is trained and tested. During training, the training data is fed into the deep graph convolutional network for forward propagation to obtain each sample's classification probability for every action category; backpropagation based on the cross-entropy loss is then performed to adjust the network parameters. After training is completed, class prediction is performed on the samples under test: each test sample is input into the trained deep graph convolutional network, and the classifi...
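The training and testing procedure described in this embodiment can be sketched in PyTorch. This is a hedged illustration only: the model here is a trivial stand-in for the patent's deep graph convolutional network, and the shapes (25 joints, 60 classes, matching NTU RGB+D) are assumptions, not details from the text.

```python
# Sketch of the loop: forward pass -> per-class probabilities,
# cross-entropy backpropagation, then argmax prediction at test time.
# TinySkeletonNet is an illustrative placeholder, not the patent's network.
import torch
import torch.nn as nn

NUM_JOINTS, NUM_FRAMES, NUM_CLASSES = 25, 16, 60  # NTU RGB+D: 25 joints, 60 classes

class TinySkeletonNet(nn.Module):
    """Placeholder for the deep graph convolutional network."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(NUM_JOINTS * 3, NUM_CLASSES)

    def forward(self, x):            # x: (batch, frames, joints, 3)
        feat = x.flatten(2).mean(1)  # average over frames -> (batch, joints*3)
        return self.fc(feat)         # class logits

model = TinySkeletonNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()    # softmax + negative log-likelihood

# One training step on random stand-in data.
x = torch.randn(8, NUM_FRAMES, NUM_JOINTS, 3)
y = torch.randint(0, NUM_CLASSES, (8,))
loss = criterion(model(x), y)        # cross-entropy loss
optimizer.zero_grad()
loss.backward()                      # backpropagation adjusts the parameters
optimizer.step()

# Testing: the predicted class is the argmax of the class probabilities.
with torch.no_grad():
    probs = torch.softmax(model(x), dim=1)
    pred = probs.argmax(dim=1)
print(pred.shape)  # torch.Size([8])
```

In a real experiment the random tensors would be replaced by skeleton sequences split according to the cross-view or cross-subject protocol.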



Abstract

The invention discloses a skeleton action recognition method based on graph convolution. The basic unit of the method is a space-time graph convolution module, which operates as follows: acquire a skeleton video; construct a skeleton graph from each frame of the video; define different human body part combinations from the skeleton graphs; construct a joint-point relation graph for each part combination; and from these build a multi-dimensional relation interaction graph comprising a part-combination interaction dimension and a joint-point interaction dimension. Graph convolution is then applied to the multi-dimensional interaction graph along both the joint-point interaction dimension and the part-combination interaction dimension, and the spatial features obtained from the two graph convolutions are fed to a local convolution network over time slices to obtain temporal dynamic features. Multiple space-time graph convolution modules are stacked to construct a neural network, and a softmax classifier performs the classification.
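The abstract's space-time graph convolution module (spatial graph convolution followed by a temporal convolution over frames) can be sketched as below. This is a minimal illustration under assumed shapes: it uses a single joint-level adjacency, whereas the patent's module additionally convolves over a part-combination interaction dimension, and all names here are illustrative.

```python
# Minimal space-time graph convolution module: a spatial graph
# convolution over the joint adjacency, then a temporal convolution
# along the frame axis, as standard ST-GCN-style designs do.
import torch
import torch.nn as nn

class STGraphConvModule(nn.Module):
    def __init__(self, in_ch, out_ch, adjacency):
        super().__init__()
        self.register_buffer("A", adjacency)     # (V, V) normalized adjacency
        self.spatial = nn.Linear(in_ch, out_ch)  # per-joint feature transform
        self.temporal = nn.Conv2d(out_ch, out_ch, kernel_size=(9, 1), padding=(4, 0))

    def forward(self, x):                             # x: (N, T, V, C)
        x = self.spatial(x)                           # (N, T, V, out_ch)
        x = torch.einsum("uv,ntvc->ntuc", self.A, x)  # aggregate over neighbors
        x = x.permute(0, 3, 1, 2)                     # (N, C, T, V) for Conv2d
        x = self.temporal(x)                          # temporal dynamics per joint
        return x.permute(0, 2, 3, 1)                  # back to (N, T, V, C)

V = 5                                # toy skeleton with 5 joints
A = torch.eye(V)                     # identity adjacency, for the sketch only
module = STGraphConvModule(3, 8, A)
out = module(torch.randn(2, 16, V, 3))
print(out.shape)  # torch.Size([2, 16, 5, 8])
```

Stacking several such modules and finishing with global pooling plus a softmax classifier would yield a network of the general shape the abstract describes.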

Description

technical field [0001] The invention belongs to the field of action recognition technology, and in particular relates to a skeleton action recognition method based on graph convolution. Background technique [0002] Human action recognition is an active research direction in computer vision. Its main purpose is to correctly classify human actions in videos. The technology can be applied in areas such as intelligent video surveillance, natural human-computer interaction, motion video analysis, and autonomous driving. With the development of hardware devices, multi-modal human motion data can now be collected easily, including RGB, depth, and infrared data. Skeleton videos obtained from depth data are robust to changes in appearance, lighting, and surrounding environment, so using them as input data for action recognition is of great significance. [0003] Deep learning is an important method for skeleton action recognition. Yan et al. proposed a spatial-temporal graph convolutional ...
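The background's skeleton representation treats joints as graph nodes and bones as edges. A hedged sketch of building a normalized adjacency matrix from a bone list follows; the toy topology and the symmetric normalization (standard in graph convolutional networks) are assumptions for illustration, not the patent's construction.

```python
# Build a skeleton-graph adjacency matrix from an edge (bone) list,
# add self-loops, and apply symmetric normalization D^{-1/2} A D^{-1/2}.
import numpy as np

bones = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5)]  # toy skeleton, 6 joints
V = 6

A = np.zeros((V, V))
for i, j in bones:                 # bones are undirected edges
    A[i, j] = A[j, i] = 1.0
A += np.eye(V)                     # self-loops keep each joint's own features

d = A.sum(axis=1)                  # node degrees (all > 0 due to self-loops)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
A_norm = D_inv_sqrt @ A @ D_inv_sqrt

print(A_norm.shape)  # (6, 6)
```

The normalized matrix `A_norm` is what a graph convolution layer would multiply joint features by when aggregating information from neighboring joints.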

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V40/20, G06N3/045, G06F18/2411
Inventors: 崔振, 刘蓉, 许春燕, 张桐, 杨健
Owner NANJING UNIV OF SCI & TECH