Human skeleton action recognition method

A human skeleton action recognition technology, applied in the field of graphics recognition, which addresses the problems that existing methods cannot adequately capture spatio-temporal feature information and are prone to recognition errors.

Active Publication Date: 2020-07-31
HEBEI UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0007] The technical problem to be solved by the present invention is to provide a human skeleton action recognition method that combines spatio-temporal attention and graph convolutional networks. The method fully exploits the diversity and complementarity of different feature information and uses an attention mechanism to adaptively adjust the weight value of each joint point of the spatial structure and the importance of each frame of the video sequence.
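For intuition only, the sketch below shows one way an attention module of this kind could reweight joints and frames. The tensor layout (batch, channels, frames, joints), the module structure, and all names are assumptions made for illustration; this is not the patented design.

```python
import torch
import torch.nn as nn

class SpatioTemporalAttention(nn.Module):
    """Minimal sketch: learn a weight per joint (spatial) and per frame (temporal).

    Input x has shape (N, C, T, V): batch, channels, frames, joints.
    Illustrative stand-in only, not the module described in the patent.
    """

    def __init__(self, channels):
        super().__init__()
        self.spatial_fc = nn.Linear(channels, 1)   # scores each joint
        self.temporal_fc = nn.Linear(channels, 1)  # scores each frame

    def forward(self, x):
        # Spatial attention: average over time, score each joint, softmax over joints.
        joint_feat = x.mean(dim=2).permute(0, 2, 1)                   # (N, V, C)
        joint_w = torch.softmax(self.spatial_fc(joint_feat), dim=1)   # (N, V, 1)
        x = x * joint_w.permute(0, 2, 1).unsqueeze(2)                 # reweight joints
        # Temporal attention: average over joints, score each frame, softmax over frames.
        frame_feat = x.mean(dim=3).permute(0, 2, 1)                   # (N, T, C)
        frame_w = torch.softmax(self.temporal_fc(frame_feat), dim=1)  # (N, T, 1)
        x = x * frame_w.permute(0, 2, 1).unsqueeze(3)                 # reweight frames
        return x

# Toy usage: 2 clips, 64 channels, 30 frames, 25 joints (all sizes assumed).
# y = SpatioTemporalAttention(64)(torch.randn(2, 64, 30, 25))
```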


Examples


Embodiment

[0100] The specific steps of the action recognition method combining spatiotemporal attention and graph convolution network in this embodiment are as follows:

[0101] The first step is to generate training data for multi-angle skeleton features:

[0102] The training data of the multi-angle skeleton features comprise joint information flow data, bone information flow data, and motion information flow data.

[0103] First, for a set of input video sequences of human skeleton actions, construct an undirected connected graph of the human skeleton, where the joint points are the vertices of the graph and the natural connections between the joint points are the edges of the graph. Define the skeleton graph G = {V, E}, where V is the set of n joint points and E is the set of m skeleton edges. The adjacency matrix A of the skeleton graph is obtained by formula (1):

[0104] A ∈ {0,1}^(n×n), where A_ij = 1 if joint points v_i and v_j are connected by a skeleton edge and A_ij = 0 otherwise  (1)
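As an illustration of the adjacency-matrix construction described in [0103], a minimal sketch follows; the toy edge list is invented for the example and is not the joint layout used by the invention.

```python
import numpy as np

def build_adjacency(n_joints, edges):
    """Build the n x n binary adjacency matrix A of an undirected skeleton graph.

    A[i, j] = A[j, i] = 1 when joints i and j are connected by a skeleton edge,
    0 otherwise, matching the G = {V, E} definition above.
    """
    A = np.zeros((n_joints, n_joints), dtype=np.int64)
    for i, j in edges:
        A[i, j] = 1
        A[j, i] = 1
    return A

# Toy example: a 5-joint skeleton (indices and connections are made up).
toy_edges = [(0, 1), (1, 2), (2, 3), (1, 4)]
A = build_adjacency(5, toy_edges)
print(A)
```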

[0105] Then use the coordinate data of its joint points to obtain the joint s...
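Paragraph [0102] lists joint, bone and motion information flows as the three inputs. The sketch below shows how such streams are commonly derived from raw joint coordinates; the exact definitions used by the invention may differ, and the array shapes and the `edges` argument are assumptions.

```python
import numpy as np

def make_streams(joints, edges):
    """Derive three skeleton streams from raw joint coordinates.

    joints: array of shape (T, V, 3) -- T frames, V joints, (x, y, z) coordinates.
    edges:  list of (child, parent) joint-index pairs along the skeleton.

    Returns joint, bone and motion streams, all shaped (T, V, 3). Definitions
    follow common practice for multi-stream skeleton models and are only an
    assumption about what the patent's streams contain.
    """
    joint_stream = joints.copy()

    # Bone stream: vector from each joint's parent to the joint itself.
    bone_stream = np.zeros_like(joints)
    for child, parent in edges:
        bone_stream[:, child] = joints[:, child] - joints[:, parent]

    # Motion stream: frame-to-frame displacement of every joint.
    motion_stream = np.zeros_like(joints)
    motion_stream[1:] = joints[1:] - joints[:-1]

    return joint_stream, bone_stream, motion_stream
```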


Abstract

The invention discloses a human skeleton action recognition method and relates to a method for recognizing graphics. The method combines spatio-temporal attention and a graph convolutional network. The diversity and complementarity of different feature information are fully mined; the weight value of each joint point of the spatial structure and the importance of each frame of the video sequence are adaptively adjusted by using an attention mechanism; and action recognition of the human skeleton is carried out by using the graph convolutional network. The method overcomes the defects of the prior art that spatio-temporal feature information cannot be well captured and that errors are likely to occur in recognizing difficult human actions.
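For readers unfamiliar with graph convolution over a skeleton, the following is a generic single-layer sketch using normalized-adjacency propagation. It is not the network architecture claimed in the invention, and the shapes and symbols are assumptions.

```python
import numpy as np

def graph_conv(x, A, W):
    """One generic graph-convolution step over per-joint features.

    x: (V, C_in) feature vector per joint, A: (V, V) binary skeleton adjacency,
    W: (C_in, C_out) learnable weights.
    Propagation rule: D^{-1/2} (A + I) D^{-1/2} x W.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-connections
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ x @ W

# Toy usage with made-up sizes: 5 joints, 3 input channels, 8 output channels.
# out = graph_conv(np.random.rand(5, 3), A, np.random.rand(3, 8))
```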

Description

technical field

[0001] The technical solution of the present invention relates to a method for recognizing graphics, in particular to a human skeleton action recognition method.

Background technique

[0002] In recent years, with the wide application of video capture sensors, the continuous development of human pose estimation algorithms, and the potential applications in intelligent video surveillance, patient monitoring systems, human-computer interaction and virtual reality, human action recognition has received more and more attention. Human action recognition based on machine vision adds action-type labels to videos containing human actions; its purpose is to analyze and understand the actions of individuals in a video and the interactions between multiple people.

[0003] According to the type of input data, human action recognition is divided into RGB-video-based methods and skeleton-video-based methods. Compared with RGB images, human ...


Application Information

IPC(8): G06K9/00, G06K9/62
CPC: G06V40/23, G06F18/2415, G06F18/253, G06F18/214
Inventors: 于明, 李杰, 郝小可, 郭迎春, 朱叶, 刘依, 阎刚
Owner: HEBEI UNIV OF TECH