
Action structure self-attention graph convolutional network for action recognition

A technology using graph convolution and networks, applied in the field of graph convolutional networks, which can solve problems such as low recognition efficiency, limited expressive power, and difficulty in generalization and/or application.

Active Publication Date: 2021-03-23
HONG KONG APPLIED SCI & TECH RES INST

AI Technical Summary

Problems solved by technology

However, traditional skeletal modeling methods usually rely on hand-crafted features or traversal rules, resulting in limited expressive power and difficulty in generalization and/or application.
[0003] There are many problems and difficulties in existing methods for recognizing human actions through skeletal modeling, such as, but not limited to, low recognition efficiency, slow recognition speed, and/or low recognition accuracy.



Detailed Description of Embodiments

[0032] The method will now be described with reference to the accompanying drawings, which show specific exemplary embodiments by way of illustration. The method may, however, be embodied in a variety of different forms, and the covered or claimed subject matter is therefore not to be construed as limited to any exemplary embodiment set forth herein. The method can be embodied as a method, device, component, or system. Accordingly, for example, embodiments may take the form of hardware, software, firmware, or any combination thereof.

[0033] Throughout the specification and claims, terms may have subtle meanings suggested or implied by the context beyond the explicitly stated meaning. Likewise, the phrases "in one embodiment" or "in some embodiments" as used herein do not necessarily refer to the same embodiment, and the phrases "in another embodiment" or "in other embodiments" as used herein do not necessarily refer to a different embodiment. The phrases "in one embodiment" or "in s...


Abstract

The invention describes a method, an apparatus, and a non-transitory computer-readable storage medium that use a graph convolutional network (GCN) to identify human body actions. The apparatus includes a memory storing instructions and a processor in communication with the memory. The method includes obtaining, by the apparatus, a plurality of articulation point gestures; normalizing, by the apparatus, the plurality of articulation point gestures to obtain a plurality of normalized articulation point gestures; extracting, by the apparatus, a plurality of coarse features from the plurality of normalized articulation point gestures using an improved spatiotemporal graph convolutional network (ST-GCN); reducing, by the apparatus, the feature dimensions of the plurality of coarse features to obtain a plurality of dimension-reduced features; optimizing, by the apparatus, the plurality of dimension-reduced features based on a self-attention model to obtain a plurality of optimized features; and identifying, by the apparatus, the human body action according to the plurality of optimized features.
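
The abstract describes a sequential pipeline: normalize the articulation point gestures, extract coarse features with an improved ST-GCN, reduce the feature dimension, optimize the reduced features with a self-attention model, and classify the action. The PyTorch sketch below only illustrates that flow; the class name ActionRecognizer, the layer sizes, the placeholder backbone, and the pooling/classification head are assumptions and not the patented implementation.

```python
# Hypothetical sketch of the pipeline summarized in the abstract: normalize joint
# poses, extract coarse features with an ST-GCN-style backbone, reduce the feature
# dimension, refine with self-attention, and classify the action. All module
# choices, names, and sizes here are assumptions, not the patented implementation.
import torch
import torch.nn as nn


class ActionRecognizer(nn.Module):
    def __init__(self, in_channels=3, coarse_dim=256, reduced_dim=64, num_classes=60):
        super().__init__()
        # Stand-in for the "improved ST-GCN" coarse feature extractor; a real
        # backbone would interleave spatial graph convolutions over the skeleton
        # with temporal convolutions over frames.
        self.st_gcn = nn.Sequential(
            nn.Conv2d(in_channels, coarse_dim, kernel_size=1),
            nn.ReLU(),
        )
        # Feature-dimension reduction.
        self.reduce = nn.Conv2d(coarse_dim, reduced_dim, kernel_size=1)
        # Self-attention over the reduced features (single head assumed).
        self.attention = nn.MultiheadAttention(reduced_dim, num_heads=1, batch_first=True)
        self.classifier = nn.Linear(reduced_dim, num_classes)

    def forward(self, poses):
        # poses: (batch, channels, frames, joints) articulation-point gestures.
        # Normalization step (simple per-sample standardization as a placeholder).
        mean = poses.mean(dim=(2, 3), keepdim=True)
        std = ((poses - mean) ** 2).mean(dim=(2, 3), keepdim=True).sqrt()
        x = (poses - mean) / (std + 1e-5)
        x = self.st_gcn(x)                                   # coarse features
        x = self.reduce(x)                                   # dimension-reduced features
        b, c, t, v = x.shape
        tokens = x.permute(0, 2, 3, 1).reshape(b, t * v, c)  # one token per joint-frame
        refined, _ = self.attention(tokens, tokens, tokens)  # optimized features
        return self.classifier(refined.mean(dim=1))          # action class scores


# Example: a batch of 2 clips, 3 coordinates per joint, 30 frames, 25 joints.
logits = ActionRecognizer()(torch.randn(2, 3, 30, 25))
```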

Description

Technical field

[0001] The present invention relates to a graph convolutional network (GCN) for human action recognition, and in particular to an improved spatio-temporal graph convolutional network with a self-attention model.

Background

[0002] Human action recognition has been actively developed in recent years because of its important role in video understanding. Generally, human actions can be recognized from multiple modalities such as appearance, depth, optical flow, and body pose. Among these modalities, the dynamic human skeleton often conveys important information that complements the other modalities. However, traditional skeletal modeling methods usually rely on hand-crafted features or traversal rules, resulting in limited expressive power and difficulty in generalization and/or application.

[0003] There are many problems and difficulties in existing methods for recognizing human actions through skeletal modeling, such as but ...
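
Because the technical field centres on a spatio-temporal graph convolutional network over the human skeleton, a minimal sketch of the underlying spatial graph-convolution step is given below, assuming a fixed joint adjacency matrix supplied by the caller; the class name SpatialGraphConv and the normalization choice are illustrative assumptions, not the patent's improved network.

```python
# Minimal sketch of the spatial graph-convolution step at the core of an ST-GCN,
# assuming a fixed skeleton adjacency matrix A; the "improved" ST-GCN and the
# self-attention model of the patent are not reproduced here.
import torch
import torch.nn as nn


class SpatialGraphConv(nn.Module):
    def __init__(self, in_channels, out_channels, adjacency):
        super().__init__()
        # Symmetrically normalized adjacency with self-loops: D^(-1/2) (A + I) D^(-1/2).
        a = adjacency + torch.eye(adjacency.size(0))
        d = a.sum(dim=1).pow(-0.5)
        self.register_buffer("A", d.unsqueeze(1) * a * d.unsqueeze(0))
        self.proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        # x: (batch, channels, frames, joints); project features per joint,
        # then aggregate over neighbouring joints via the adjacency matrix.
        x = self.proj(x)
        return torch.einsum("nctv,vw->nctw", x, self.A)
```

A full ST-GCN block would follow such a layer with a temporal convolution along the frame axis and stack several blocks before the dimension-reduction and self-attention stages described in the abstract.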


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K 9/00; G06K 9/62; G06N 3/04; G06N 3/08
CPC: G06N 3/08; G06V 40/25; G06V 40/20; G06N 3/048; G06N 3/045; G06F 18/213; G06F 18/24
Inventor: 李海良, 刘扬, 李文迪, 雷志斌
Owner: HONG KONG APPLIED SCI & TECH RES INST