Viewpoint-adjustment-based graph convolutional recurrent network skeleton action recognition method and system

A technology for action recognition based on recurrent neural networks, applied in the field of skeletal action recognition with graph convolutional recurrent networks based on viewpoint adjustment. It solves the problems of differing observation angles, inconsistent recognition results, and low action recognition accuracy, achieving the effects of improved accuracy and broad application prospects.

Active Publication Date: 2020-06-26
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

[0005] The inventors of the present disclosure have discovered that the prior art mostly uses neural network models to recognize skeletal actions. However, because of different shooting angles or movement of the subject's body, the observation angles differ, and skeletal representations of the same posture captured from different angles can vary greatly, so the obtained recognition results also differ substantially, resulting in low accuracy of the final action recognition.



Examples


Embodiment 1

[0029] As shown in Figure 1, Embodiment 1 of the present disclosure provides a skeletal action recognition method based on a graph convolutional recurrent network with viewpoint adjustment, comprising the following steps:

[0030] (1) Preprocess the acquired action data, using the NTU-RGB+D dataset as the action recognition dataset. This dataset is the largest of its kind and provides 3D skeleton joint coordinates, covering 60 different actions, with two benchmarks: cross-view and cross-subject;

[0031] Specifically:

[0032] (1-1) Obtain the raw body data from the skeleton sequence; each body's data is a dictionary whose keys include the raw 3D joints, the raw 2D color positions, and the subject's frame indices;

[0033] (1-2) Obtain denoised data (joint positions and color positions) from the raw skeleton sequence... (a sketch of these two sub-steps follows below)
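The following is a minimal sketch of sub-steps (1-1) and (1-2). The dictionary keys (`raw_xyz`, `color_xy`, `frame_indices`) and the frame-count filter with its `min_frames` threshold are illustrative assumptions; the patent only states that each body is a dictionary of raw 3D joints, raw 2D color positions, and frame indices, and that denoised joint and color positions are extracted from the raw sequence.

```python
# Hedged preprocessing sketch for steps (1-1) and (1-2). Key names and the
# simple frame-count filter are assumptions, not the patent's exact pipeline.
import numpy as np

def read_raw_bodies(skeleton_sequence):
    """(1-1) Collect per-body data from a parsed skeleton sequence."""
    bodies = {}
    for frame_idx, frame in enumerate(skeleton_sequence):
        for body in frame["bodies"]:
            entry = bodies.setdefault(body["id"], {
                "raw_xyz": [],        # raw 3D joint coordinates
                "color_xy": [],       # raw 2D positions in the color image
                "frame_indices": [],  # frames in which this body appears
            })
            entry["raw_xyz"].append(body["joints_xyz"])
            entry["color_xy"].append(body["joints_color_xy"])
            entry["frame_indices"].append(frame_idx)
    return bodies

def denoise_bodies(bodies, min_frames=11):
    """(1-2) Drop bodies seen in too few frames, then stack arrays."""
    denoised = {}
    for body_id, entry in bodies.items():
        if len(entry["frame_indices"]) < min_frames:
            continue  # likely a spurious detection
        denoised[body_id] = {
            "xyz": np.asarray(entry["raw_xyz"], dtype=np.float32),
            "color_xy": np.asarray(entry["color_xy"], dtype=np.float32),
            "frame_indices": np.asarray(entry["frame_indices"]),
        }
    return denoised
```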

Embodiment 2

[0055] Embodiment 2 of the present disclosure provides a skeletal action recognition system based on a graph convolutional recurrent network with viewpoint adjustment, comprising:

[0056] The preprocessing module is configured to: preprocess the acquired action data;

[0057] The skeleton data prediction module is configured to: use the trained graph convolutional recurrent neural network, taking the preprocessed data as input, to obtain the spatiotemporal information of the skeleton data;

[0058] The classification module is configured to: apply the Softmax function, taking the obtained spatiotemporal information as input, to obtain a classification result for the skeletal action (a combined sketch of these modules follows below).
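A minimal PyTorch sketch of how the prediction and classification modules could fit together is shown below: a per-frame graph convolution over the skeleton joints feeds an LSTM over time, and a Softmax over the final hidden state yields class probabilities. The layer sizes, the normalized adjacency matrix `adj`, and all module names are assumptions for illustration, not the patent's exact architecture.

```python
# Hedged sketch: graph convolution per frame + LSTM over time + Softmax.
# Architecture details below are illustrative assumptions.
import torch
import torch.nn as nn

class GraphConvRecurrentClassifier(nn.Module):
    def __init__(self, num_joints, in_dim, gcn_dim, lstm_dim, num_classes, adj):
        super().__init__()
        self.register_buffer("adj", adj)            # (V, V) normalized skeleton adjacency
        self.gcn_weight = nn.Linear(in_dim, gcn_dim)  # per-joint feature transform
        self.lstm = nn.LSTM(num_joints * gcn_dim, lstm_dim, batch_first=True)
        self.fc = nn.Linear(lstm_dim, num_classes)

    def forward(self, x):
        # x: (batch, time, joints, channels) 3D joint coordinates per frame
        b, t, v, c = x.shape
        h = self.gcn_weight(x)                          # per-joint transform
        h = torch.einsum("uv,btvc->btuc", self.adj, h)  # aggregate over graph neighbors
        h = torch.relu(h).reshape(b, t, -1)             # flatten joints per frame
        out, _ = self.lstm(h)                           # temporal modeling
        logits = self.fc(out[:, -1])                    # last hidden state -> classes
        return torch.softmax(logits, dim=-1)            # Softmax classification

# Usage with NTU-RGB+D-like shapes (25 joints, 60 classes); the identity
# adjacency is a placeholder for the skeleton's normalized adjacency.
adj = torch.eye(25)
model = GraphConvRecurrentClassifier(25, 3, 64, 128, 60, adj)
probs = model(torch.randn(8, 30, 25, 3))  # (8, 60) class probabilities
```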

[0059] The specific recognition method is the same as that in Embodiment 1 and will not be repeated here.

Embodiment 3

[0061] Embodiment 3 of the present disclosure provides a medium on which a program is stored; when the program is executed by a processor, the steps of the viewpoint-adjustment-based graph convolutional recurrent network skeleton action recognition method described in Embodiment 1 of the present disclosure are implemented.



Abstract

The invention provides a viewpoint-adjustment-based graph convolutional recurrent network skeleton action recognition method and system, relates to the technical field of action recognition, and solves the problem of reduced recognition accuracy caused by differing observation angles. The trained graph convolutional recurrent neural network takes the preprocessed data as input to obtain the spatiotemporal information of the skeleton data; a Softmax function then takes the obtained spatiotemporal information as input to produce a skeletal action classification result. The method integrates the advantages of graph convolutional networks and recurrent networks, models both the temporal and spatial information of the skeleton data, further improves action recognition accuracy over LSTM-based action recognition methods, is general for behavior recognition on skeleton datasets, and has broad application prospects.

Description

technical field

[0001] The present disclosure relates to the technical field of action recognition, and in particular to a method and system for skeletal action recognition based on a graph convolutional recurrent network with viewpoint adjustment.

Background technique

[0002] The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.

[0003] Human action recognition has always been an important and challenging problem in the field of computer vision. Human action recognition technology is applied in many fields, such as visual surveillance, human-computer interaction, video indexing/retrieval, video summarization, and video understanding.

[0004] One of the main challenges of skeleton-based human action recognition is the complex viewpoint changes when capturing human action data. Skeletal representations of the same pose can be quite different if captured from different viewpoints...
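To make the viewpoint problem concrete, the sketch below shows a simple geometric view normalization: translate each skeleton so the hip center is the origin, then rotate about the vertical axis so the hip line is parallel to the x-axis. This only illustrates why viewpoint adjustment helps; the patent's adjustment is learned inside the network, and the joint indices used here (hip center, left hip, right hip) are assumptions.

```python
# Hedged sketch of geometric viewpoint normalization, not the patent's
# learned viewpoint adjustment. Joint indices are assumed (Kinect-v2-style).
import numpy as np

HIP_CENTER, LEFT_HIP, RIGHT_HIP = 0, 12, 16  # assumed joint indices

def normalize_view(skeleton):
    """skeleton: (num_joints, 3) array of 3D joint coordinates."""
    centered = skeleton - skeleton[HIP_CENTER]          # translate hip center to origin
    hip_vec = centered[RIGHT_HIP] - centered[LEFT_HIP]  # left-to-right hip axis
    angle = np.arctan2(hip_vec[2], hip_vec[0])          # yaw of the hip line
    c, s = np.cos(angle), np.sin(angle)
    rot_y = np.array([[c, 0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s, 0.0, c]])                    # rotation about the y-axis
    return centered @ rot_y.T                           # canonical-view skeleton
```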


Application Information

IPC(8): G06K9/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/084; G06V40/10; G06V40/20; G06N3/047; G06N3/045; G06F18/2415
Inventors: 周风余, 黄晴晴, 贺家凯, 刘美珍, 尹磊
Owner: SHANDONG UNIV