
Bone action recognition method based on learnable PL-GCN and ECLSTM

A PL-GCN and ECLSTM based action recognition technology, applied in the field of action recognition, which solves the problems that the spatial features of skeleton joint points cannot be fully extracted and that the differing importance of the motion information in different frames is not considered.

Pending Publication Date: 2022-05-24
CHONGQING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

Although the above method uses the long-term memory function of the LSTM network to capture temporal features, it does not take into account that different image frames carry motion information of different importance. To address this, this patent proposes a feature-enhanced ECLSTM network that captures motion-significant keyframes to efficiently extract temporal features.
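As a rough illustration of this keyframe-weighting idea (not the patent's actual ECLSTM; the class name, layer sizes, and the softmax scoring below are assumptions), the per-frame outputs of an LSTM can be reweighted by a learned saliency score so that motion-significant frames dominate the temporal feature:

    import torch
    import torch.nn as nn

    class FrameAttentionLSTM(nn.Module):
        """Hypothetical sketch: an LSTM whose per-frame outputs are reweighted
        by a learned attention score, emphasizing motion-significant keyframes."""
        def __init__(self, in_dim, hidden_dim):
            super().__init__()
            self.lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True)
            self.score = nn.Linear(hidden_dim, 1)   # one saliency score per frame

        def forward(self, x):                        # x: (batch, frames, in_dim)
            h, _ = self.lstm(x)                      # h: (batch, frames, hidden_dim)
            w = torch.softmax(self.score(h), dim=1)  # frame weights sum to 1 over time
            return (w * h).sum(dim=1)                # weighted temporal feature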
In addition, the above method feeds the skeleton data directly into the original information module and cannot fully extract the spatial features of the skeleton joint points, because the features of each joint point are closely related to those of the other joint points in its neighborhood, and the connection graph between skeleton joint points is a graph topology.
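For concreteness, the graph topology referred to here is simply the joint connectivity of the human skeleton. A minimal sketch of building and normalizing such an adjacency matrix (the five-joint edge list is purely illustrative; real datasets such as NTU RGB+D define their own 25-joint bone list):

    import numpy as np

    # Illustrative edge list (pairs of connected joints) for a toy 5-joint skeleton;
    # a real dataset defines its own joint indexing and bone list.
    edges = [(0, 1), (1, 2), (1, 3), (1, 4)]
    num_joints = 5

    A = np.eye(num_joints, dtype=np.float32)  # self-loops keep each joint's own feature
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0               # undirected bone connections

    # Symmetric normalization D^{-1/2} A D^{-1/2}, standard in graph convolutions
    d_inv_sqrt = np.diag(A.sum(axis=1) ** -0.5)
    A_norm = d_inv_sqrt @ A @ d_inv_sqrt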


Embodiment Construction

[0086] The technical solutions in the embodiments of the present invention will be described clearly and in detail below with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.

[0087] The technical scheme by which the present invention solves the above-mentioned technical problems is as follows:

[0088] This patent proposes a spatiotemporal graph convolutional skeleton action recognition method with learning ability. The overall framework of the model is shown in Figure 1.

[0089] First, in order to fully obtain the spatial and temporal features of action video samples, the present invention adopts a two-stream network framework and proposes a graph convolutional network with self-learning ability to extract the spatial feature relationships between skeleton joint points. To fully extract these spatial features, we consider stacking nine layers of learnable graph convolutional network modules...
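The patent's exact PL-GCN formulation is not reproduced on this page, but in the spirit of adaptive graph convolutions, a "learnable" layer can be sketched as a fixed normalized adjacency plus a learnable residual adjacency (the class name, shapes, and the additive form below are all assumptions, not the patent's stated design):

    import torch
    import torch.nn as nn

    class LearnableGCNLayer(nn.Module):
        """Sketch of a graph convolution whose adjacency is partly learned:
        A is the fixed normalized skeleton adjacency; B is a learnable residual
        adjacency that can discover joint relations beyond the physical bones."""
        def __init__(self, in_ch, out_ch, A_norm):
            super().__init__()
            self.register_buffer("A", torch.as_tensor(A_norm))
            self.B = nn.Parameter(torch.zeros_like(self.A))    # learned relations
            self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1)
            self.relu = nn.ReLU()

        def forward(self, x):  # x: (batch, channels, frames, joints)
            x = torch.einsum("nctv,vw->nctw", x, self.A + self.B)
            return self.relu(self.proj(x))

    # Nine such layers stacked, as the embodiment describes:
    # spatial_net = nn.Sequential(*[LearnableGCNLayer(64, 64, A_norm) for _ in range(9)])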


Abstract

The invention discloses a skeleton action recognition method based on a learnable PL-GCN and an ECLSTM, and relates to the field of action recognition. It addresses the limited ability of existing methods to capture key frames and significant motion joints during skeleton action recognition, and their weak ability to classify similar actions. The method comprises: proposing a learnable graph convolutional network (PL-GCN), which is used to improve the physical structure of the model, for the problem that similar actions are easily confused; proposing a feature-enhanced long short-term memory network (ECLSTM), which strengthens temporal features, for the problem of weak key-frame capture; building a skeleton graph from the graph topology of the skeleton sequence data; fusing the spatial features obtained after graph convolution with the temporal features extracted by the ECLSTM network; and carrying out average pooling and convolution on the fused features, followed by final feature classification. The proposed method is superior to some current methods in recognition accuracy, algorithm complexity, and feature extraction capability.
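To make the last steps of the abstract concrete (fusion, average pooling, convolution, classification), a hedged sketch follows; the concatenation-based fusion, the 1x1 convolution, and all channel sizes are assumptions rather than the patent's stated design:

    import torch
    import torch.nn as nn

    class FusionClassifier(nn.Module):
        """Sketch mirroring the abstract: fuse spatial (PL-GCN) and temporal
        (ECLSTM) feature maps, average-pool, convolve, then classify."""
        def __init__(self, spatial_ch, temporal_ch, num_classes):
            super().__init__()
            fused_ch = spatial_ch + temporal_ch
            self.pool = nn.AdaptiveAvgPool2d(1)           # global average pooling
            self.conv = nn.Conv2d(fused_ch, fused_ch, 1)  # 1x1 convolution
            self.fc = nn.Linear(fused_ch, num_classes)    # final classification

        def forward(self, spatial_feat, temporal_feat):
            # both assumed shaped (batch, channels, frames, joints)
            fused = torch.cat([spatial_feat, temporal_feat], dim=1)
            z = self.conv(self.pool(fused))               # (batch, fused_ch, 1, 1)
            return self.fc(z.flatten(1))                  # class logits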

Description

technical field

[0001] The invention belongs to the field of action recognition, and in particular relates to a skeleton action recognition method.

Background technique

[0002] In recent years, skeleton-based human action recognition has developed rapidly in video surveillance, human-computer interaction, and virtual reality, and has attracted widespread attention in the field of computer vision. Skeleton data is a topological representation of human joints and bones. Compared with many methods based on RGB and optical flow, it requires little computation and is robust to complex backgrounds, speed changes, and the like. With the advent of Microsoft's Kinect depth camera and some excellent pose estimation algorithms, more and more researchers have taken up skeleton-based action recognition.

[0003] Early skeletal action recognition usually encoded all body joint positions in each frame as feature vectors or pseudo-images for pattern learning, and then fed this imag...

Application Information

IPC(8): G06V40/20, G06V40/10, G06V20/40, G06T7/73, G06N3/08, G06N3/04, G06K9/62, G06V10/764, G06V10/82
CPC: G06N3/084, G06T7/73, G06T2207/30008, G06T2207/30196, G06N3/044, G06N3/045, G06F18/24
Inventors: 蔡林沁, 潘锐, 方豪度, 赖廷杰
Owner: CHONGQING UNIV OF POSTS & TELECOMM