Human body action recognition method based on space-time attention mechanism

A human action recognition technology based on an attention mechanism, applied in the field of computer vision. It addresses the problem that existing methods do not consider the relationship between the local region of interest of a human action and global region features, with the effect of strengthening the action representation and improving recognition performance.

Active Publication Date: 2021-02-12
DALIAN UNIV OF TECH

Problems solved by technology

At the same time, RGB video datasets contain richer visual human-motion information, so research based on this type of dataset is more challenging.
[0005] (2) Traditional human action recognition methods
Existing methods do not take into account the relationship between the local region of interest of a human action and global region features.



Embodiment Construction

[0057] To make the technical solution and underlying principles of the present invention clearer and more specific, the present invention is further described below with reference to the accompanying drawings and examples.

[0058] This embodiment discloses a human action recognition method based on a spatio-temporal attention mechanism. The overall diagram is shown in Figure 1, and a schematic diagram of the detailed network structure is shown in Figure 3. The specific steps are as follows:

[0059] 1. Divide each human action video in the dataset into 5 clips of 20 frames each, and uniformly resize the video frames to 224*224 pixels. Randomly select a single frame from each clip as the input of the spatial network; apply the TV-L1 optical flow method to the video frames to obtain optical flow maps in the horizontal and vertical directions, and store them as JPEG images as the input of the temporal network.
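The clip division and frame sampling in the step above can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the function names are hypothetical, and the TV-L1 optical flow step is only noted in a comment (in practice it could be computed with an optical-flow library such as OpenCV's optflow module).

```python
import numpy as np

def split_into_clips(video, num_clips=5, clip_len=20):
    """Split a video array of shape (T, H, W, C) into num_clips clips
    of clip_len frames each, as described in the patent's step 1."""
    needed = num_clips * clip_len
    assert video.shape[0] >= needed, "video too short for the requested clips"
    frames = video[:needed]
    return frames.reshape(num_clips, clip_len, *video.shape[1:])

def sample_rgb_frames(clips, rng):
    """Randomly pick one RGB frame per clip as the spatial-network input."""
    idx = rng.integers(0, clips.shape[1], size=clips.shape[0])
    return clips[np.arange(clips.shape[0]), idx]

rng = np.random.default_rng(0)
# Dummy video, already resized to 224x224 pixels (120 frames, RGB).
video = rng.random((120, 224, 224, 3))
clips = split_into_clips(video)             # shape (5, 20, 224, 224, 3)
rgb_inputs = sample_rgb_frames(clips, rng)  # shape (5, 224, 224, 3)
# TV-L1 optical flow (horizontal/vertical) would be computed per frame pair
# here and saved as JPEG images for the temporal network.
```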



Abstract

The invention belongs to the field of computer vision and relates to human action recognition in video, used for locating and classifying human actions in video; it specifically relates to a human action recognition method based on a spatio-temporal attention mechanism. The attention mechanism based on a spatial transformer network provided by the invention acquires the region related to human motion, so that detailed changes between actions are captured; the method for fusing the local region with global features enhances the representation of the human action; and the global feature descriptor provided by the invention aggregates spatial information, temporal information, and spatio-temporal interaction information to distinguish human actions, thereby improving the recognition effect.
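The local-global fusion idea in the abstract can be illustrated with a simplified sketch: pool an attended local region and the whole feature map separately, then concatenate them into one descriptor. This is a hypothetical stand-in for the patent's fusion module, not its actual network; the function name and the average-pooling choice are assumptions.

```python
import numpy as np

def fuse_local_global(feature_map, box):
    """Fuse an attended local region with the global feature.

    feature_map: array of shape (H, W, C), e.g. a CNN feature map.
    box: (y0, y1, x0, x1) bounds of the attended region (here given
         directly; in the patent it would come from the spatial
         transformer attention).
    Returns a (2C,) vector: [global descriptor, local descriptor].
    """
    y0, y1, x0, x1 = box
    global_feat = feature_map.mean(axis=(0, 1))               # (C,)
    local_feat = feature_map[y0:y1, x0:x1].mean(axis=(0, 1))  # (C,)
    return np.concatenate([global_feat, local_feat])          # (2C,)

fm = np.random.default_rng(1).random((14, 14, 512))
fused = fuse_local_global(fm, (3, 10, 3, 10))  # shape (1024,)
```

Concatenation is only one possible fusion; the first half of the vector always equals the global average descriptor, so the local cue augments rather than replaces the global context.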

Description

Technical field

[0001] The invention belongs to the field of computer vision, relates to human action recognition in video, and is used for locating and classifying human actions in video; it is specifically a human action recognition method based on a spatio-temporal attention mechanism.

Background technique

[0002] In recent years, with the advent of the era of artificial intelligence and the rapid development of related technologies in the computer field, research on human-computer interaction has attracted more and more attention, and the application fields of robots have become broader. In a human-robot interaction system, the robot needs to recognize human behavior from the acquired video data; therefore, to achieve efficient and harmonious cooperation between humans and robots, it is necessary to accurately identify human behaviors. Although research on human action recognition has made important progress in recent years, t...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00 G06K9/32 G06K9/46 G06K9/62 G06N3/04 G06N3/08
CPC: G06N3/08 G06V40/20 G06V20/49 G06V10/25 G06V10/44 G06N3/045 G06F18/214 G06F18/253
Inventor: 于华 候亚庆 葛宏伟 周东生 张强
Owner: DALIAN UNIV OF TECH