
Action recognition method based on double-flow convolution attention

An action recognition technology based on attention, applied in the field of computer technology. It addresses problems such as the differing importance of video frames, differences in key information across frames, and the failure to fuse the information in spatial features and motion features; the effects include increased feature diversity, richer feature information, and mitigation of the loss of long-range temporal information.

Active Publication Date: 2021-06-08
HANGZHOU DIANZI UNIV
Cites 4 · Cited by 9

AI Technical Summary

Problems solved by technology

[0004] Existing video action recognition methods still have several deficiencies. First, the key information differs across video frames, and frames are not equally important, so a single visual attention mechanism cannot effectively capture the key information. Second, three-dimensional convolutional neural networks are limited by the size of the convolution kernel: they can only extract short-range temporal dependencies over a small window of frames and cannot capture long-range temporal dependencies. Third, most two-stream methods simply take a weighted sum of the recognition results produced from the two features, without fusing the information in the spatial features and the motion features.
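The third deficiency refers to the common late-fusion baseline, in which each stream produces its own class probabilities and the two are merely averaged. A minimal illustration (the function name and fixed weight `alpha` are illustrative, not from the patent):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def late_fusion(logits_rgb, logits_flow, alpha=0.5):
    """Baseline two-stream late fusion: a weighted sum of each
    stream's class probabilities. Note the spatial and motion
    features themselves never interact -- only the final scores do."""
    return alpha * softmax(logits_rgb) + (1 - alpha) * softmax(logits_flow)

probs = late_fusion(np.array([2.0, 0.5, 0.1]), np.array([1.5, 1.0, 0.2]))
pred = int(np.argmax(probs))  # class 0 here
```

This is exactly the score-level shortcut the patent's dual-stream fusion module is meant to replace with feature-level fusion.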


Image

  • Action recognition method based on double-flow convolution attention


Embodiment Construction

[0038] The present invention will be further described below in conjunction with the accompanying drawings.

[0039] In the action recognition method based on two-stream convolutional attention, a given video is first preprocessed to extract an appearance feature representation and a motion feature representation. The two feature representations are then fed into a convolutional attention module, which captures an appearance attention feature representation and a motion attention feature representation of the video's key content. Next, the two attention feature representations are fused with each other through a dual-stream fusion module, yielding a dual-stream feature representation that combines appearance and motion information. Finally, the dual-stream feature representation is used to determine the action category of the video content. The method uses the convolutional attention mechanism to capture the latent patterns of video actions and to effectively characterize the temporal relations of the video content.
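The pipeline above can be sketched in NumPy. The patent does not disclose the exact gating networks, so the channel attention below uses a simple squeeze-and-excitation-style gate and the spatio-temporal attention a shared softmax map; all function names and the gating forms are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """Gate each channel with a weight derived from global average
    pooling (squeeze-and-excitation style; the patent's trained
    gating network is not reproduced here).  feat: (T, C, H, W)."""
    w = sigmoid(feat.mean(axis=(0, 2, 3)))      # one weight per channel, (C,)
    return feat * w[None, :, None, None]

def spatiotemporal_attention(feat):
    """Softmax attention map over all (T, H, W) positions, shared
    across channels -- an illustrative stand-in for the patent's
    space-time attention."""
    a = feat.mean(axis=1)                        # (T, H, W)
    a = np.exp(a - a.max())
    a = a / a.sum()                              # weights over all positions
    return feat * a[:, None, :, :]

def conv_attention_module(feat):
    # channel attention followed by spatio-temporal attention
    return spatiotemporal_attention(channel_attention(feat))

# toy appearance / motion features: (T frames, C channels, H, W)
rng = np.random.default_rng(0)
app = conv_attention_module(rng.standard_normal((4, 8, 7, 7)))
mot = conv_attention_module(rng.standard_normal((4, 8, 7, 7)))
```

Both attended representations keep the original tensor shape, so they can be passed on to a fusion module and classifier unchanged.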



Abstract

The invention discloses an action recognition method based on double-flow convolution attention. The method comprises the following steps: first, preprocessing a video to obtain a frame image sequence and an optical flow image sequence, and extracting the video's appearance feature representation and motion feature representation respectively; then constructing a convolution attention module to obtain attention feature representations of the frame images and the optical flow images, and performing information fusion on the two attention representations through a double-flow fusion module; and finally training an action recognition model using the convolution attention mechanism and the double-flow fusion method, the model outputting the action category of a preprocessed new video. The method uses channel attention and spatio-temporal attention to capture the latent patterns and spatio-temporal relationships of video action content, and fuses the video's appearance features and motion features from a global perspective through double-flow fusion, effectively alleviating the loss of long-range temporal information and improving the accuracy of action recognition.
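The abstract's fusion "from a global perspective" can be sketched as a gated, feature-level combination of the two streams, where the gate is computed from each stream's global descriptor. The gate form below is a fixed element-wise construction chosen for illustration; the patent's trained fusion parameters are not reproduced:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_stream_fusion(app, mot):
    """Gated fusion of appearance and motion features (T, C, H, W).
    A per-channel gate is derived from both streams' global average
    descriptors, so each stream modulates the other based on global
    context rather than a fixed score-level weight."""
    g_app = app.mean(axis=(0, 2, 3))             # global descriptor, (C,)
    g_mot = mot.mean(axis=(0, 2, 3))
    gate = sigmoid(g_app - g_mot)                # per-channel weight in (0, 1)
    gate = gate[None, :, None, None]
    return gate * app + (1.0 - gate) * mot       # convex combination per channel

rng = np.random.default_rng(1)
app = rng.standard_normal((4, 8, 7, 7))
mot = rng.standard_normal((4, 8, 7, 7))
fused = dual_stream_fusion(app, mot)
```

Because the gate lies in (0, 1), the fused tensor is an element-wise convex combination of the two streams and preserves their shape for the downstream classifier.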

Description

technical field

[0001] The invention belongs to the technical field of computers, in particular the field of action recognition in video analysis, and specifically relates to an action recognition method based on dual-stream convolution attention.

Background technique

[0002] In recent years, video data of all kinds has been growing rapidly, and identifying the action content of videos has become a fundamental research topic for many video processing tasks. Action recognition technology assigns action categories to videos based on their content, and it has significant social value in application scenarios such as assisted driving, video content review, and personalized recommendation. For example, in vehicle-assisted driving, action recognition can let users issue gesture commands to the navigation system, improving driving comfort; in video content review, an action recognition system can assist manual video con...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06N3/04; G06N3/08
CPC: G06N3/08; G06V40/20; G06V20/46; G06V20/41; G06N3/048; G06N3/045
Inventor: 李平, 马浩男, 曹佳晨, 徐向华
Owner: HANGZHOU DIANZI UNIV