
Human body action recognition method based on separable three-dimensional residual attention network

A human action recognition method based on three-dimensional convolution, applied in the field of computer vision. It addresses the difficulty of optimizing deep three-dimensional convolution models, and achieves the effects of enhancing classification accuracy and recognition efficiency, alleviating optimization difficulty, and improving discriminative ability.

Active Publication Date: 2021-07-02
CHONGQING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

[0006] In view of this, the purpose of the present invention is to provide a human action recognition method based on a separable three-dimensional residual attention network. The method adopts a reasonable kernel decomposition operation to alleviate the optimization difficulty of deep three-dimensional convolution models, and combines an attention mechanism to improve the flexibility of key-feature selection, thereby producing higher-quality spatio-temporal visual features and improving the recognition performance of the model.
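The motivation for the kernel decomposition can be illustrated by a parameter count: factorizing a full t × h × w three-dimensional kernel into a 1 × h × w spatial convolution followed by a t × 1 × 1 temporal convolution (as in R(2+1)D-style decompositions) reduces parameters and eases optimization. The sketch below is illustrative only; the channel sizes and the intermediate width `c_mid` are hypothetical, not taken from the patent.

```python
def params_standard_3d(c_in, c_out, t=3, h=3, w=3):
    """Parameters of a full t x h x w 3D convolution (bias ignored)."""
    return c_in * c_out * t * h * w

def params_separable_3d(c_in, c_out, c_mid, t=3, h=3, w=3):
    """Parameters of a 1 x h x w spatial convolution followed by a
    t x 1 x 1 temporal convolution, with c_mid intermediate channels
    (bias ignored)."""
    spatial = c_in * c_mid * h * w
    temporal = c_mid * c_out * t
    return spatial + temporal

if __name__ == "__main__":
    # Hypothetical layer with 64 input and 64 output channels.
    full = params_standard_3d(64, 64)            # 64 * 64 * 27 = 110592
    sep = params_separable_3d(64, 64, c_mid=64)  # 64*64*9 + 64*64*3 = 49152
    print(full, sep)
```

With equal channel widths the separable form here uses fewer than half the parameters of the standard kernel, which is the "alleviating optimization difficulty" effect the paragraph above refers to.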




Embodiment Construction

[0055] The embodiments of the present invention are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present invention from the contents disclosed in this specification. The present invention can also be implemented or applied through other different specific embodiments, and various details in this specification can be modified or changed based on different viewpoints and applications without departing from the spirit of the present invention. It should be noted that the drawings provided in the following embodiments illustrate the basic idea of the present invention only in a schematic manner, and the following embodiments and the features in the embodiments can be combined with each other provided there is no conflict.

[0056] See Figure 1 to Figure 5. The present invention designs a human action recognition method based on a separable three-dimensional residual attention network,...
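The overall block structure described in the abstract (a Sep-3D residual attention block: separable 3D convolution, dual attention weighting, residual shortcut) can be sketched abstractly as follows. All function names are hypothetical placeholders for the patent's stages, and the toy usage passes identity-like stages only to show the data flow.

```python
import numpy as np

def sep3d_rab(x, f_spatial, f_temporal, f_attn):
    """Sketch of one Sep-3D residual attention block (names hypothetical):
    a separable 3D convolution (spatial stage, then temporal stage),
    dual-attention weighting of the intermediate features, and an
    identity shortcut as in ResNet."""
    y = f_temporal(f_spatial(x))  # factorized 3D convolution
    y = f_attn(y)                 # channel + spatial attention weighting
    return x + y                  # residual connection

# Toy usage on a (C, T, H, W) feature map with trivial stand-in stages.
x = np.ones((2, 4, 3, 3))
out = sep3d_rab(x, lambda v: v, lambda v: v, lambda v: 0.5 * v)
print(out[0, 0, 0, 0])  # 1 + 0.5 = 1.5
```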



Abstract

The invention relates to a human body action recognition method based on a separable three-dimensional residual attention network, and belongs to the field of computer vision. The method comprises the following steps: S1, replacing the standard three-dimensional convolutions in a 3D ResNet with separable three-dimensional convolutions to construct a Sep-3D ResNet; S2, designing a channel attention module and a spatial attention module, and stacking them in sequence to form a dual attention mechanism; S3, expanding the dual attention module along the time dimension so that dual attention weighting is applied to middle-layer convolution features at different moments, and embedding it into the Sep-3D residual attention blocks (Sep-3D RABs) of the Sep-3D ResNet to build the Sep-3D RAN; and S4, training the Sep-3D RAN jointly end-to-end using a multi-stage training strategy. The method improves the discriminative power of classification features, achieves efficient extraction of high-quality spatio-temporal visual features, and enhances the classification accuracy and recognition efficiency of the model.
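The channel and spatial attention modules of step S2 can be sketched roughly as below. This is a minimal numpy illustration, not the patent's implementation: the channel branch follows a squeeze-and-excitation pattern (global pooling, two linear maps, sigmoid gate), and the spatial branch is simplified to a parameter-free sigmoid over the channel-averaged map; the weight shapes `w1`, `w2` and the reduction ratio are assumptions for the example.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Channel attention on a (C, T, H, W) feature map:
    global-average-pool over T,H,W, two linear layers with ReLU,
    then a sigmoid gate applied per channel."""
    squeeze = feat.mean(axis=(1, 2, 3))           # (C,)
    hidden = np.maximum(0.0, w1 @ squeeze)        # ReLU, (C // r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid, (C,)
    return feat * gate[:, None, None, None]

def spatial_attention(feat):
    """Simplified spatial attention: average over channels, sigmoid,
    broadcast the (T, H, W) gate back over all channels."""
    pooled = feat.mean(axis=0)                    # (T, H, W)
    gate = 1.0 / (1.0 + np.exp(-pooled))
    return feat * gate[None, ...]
```

Stacking the two in sequence, as step S2 describes, would simply be `spatial_attention(channel_attention(feat, w1, w2))`.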

Description

Technical field

[0001] The invention belongs to the field of computer vision, and relates to a human action recognition method based on a separable three-dimensional residual attention network.

Background technique

[0002] A huge amount of information is hidden in video. The huge number of users in the online video market and its rapidly growing size have brought great challenges to the management, storage and identification of online video, so the online video business has received increasing attention from all parties. In the human-centered computer vision research field, human action recognition has become an important research direction due to its wide application in many fields such as human-computer interaction, smart home, autonomous driving, and virtual reality. The main task of human action recognition is to automatically identify human actions in image sequences or videos. By processing and analyzing ima...

Claims


Application Information

IPC(8): G06K9/00 G06K9/46 G06K9/62 G06N3/04 G06N3/08
CPC: G06N3/08 G06V40/20 G06V20/42 G06V20/46 G06V10/462 G06N3/047 G06N3/048 G06N3/045 G06F18/2415
Inventor: 张祖凡, 彭月, 甘臣权, 张家波
Owner CHONGQING UNIV OF POSTS & TELECOMM