
An Action Recognition Method Based on Two-Stream Convolutional Attention

An attention and convolution technology, applied in the field of computing, which addresses the problems that key information differs across video frames, that frames are of unequal importance, and that spatial feature and motion feature information is not fused; its effects are to alleviate the lack of temporal information, increase diversity, and enrich the amount of feature information.

Active Publication Date: 2022-05-13
HANGZHOU DIANZI UNIV

AI Technical Summary

Problems solved by technology

[0004] Existing video action recognition methods still have several deficiencies. First, the key information differs across video frames and different frames are not equally important, so a single visual attention mechanism cannot effectively capture the key information. Second, a three-dimensional convolutional neural network is limited by the size of its convolution kernel: it can only extract short-range temporal dependencies across a few neighboring frames and lacks the ability to extract long-range temporal dependencies. Third, most two-stream methods simply take a weighted sum of the two streams' recognition scores and do not consider fusing the information in the spatial features and the motion features.
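To make the third deficiency concrete, here is a minimal sketch (hypothetical tensor shapes and names, not the patent's implementation) contrasting the score-level weighted sum used by most two-stream methods with the feature-level fusion this method argues for:

```python
import torch

# Hypothetical per-stream outputs: a batch of 8 videos, 101 action classes.
appearance_scores = torch.randn(8, 101)  # logits from the RGB stream
motion_scores = torch.randn(8, 101)      # logits from the optical-flow stream

# Score-level (late) fusion: most two-stream methods stop here.
# The streams never exchange information; only predictions are averaged.
late = 0.5 * appearance_scores + 0.5 * motion_scores

# Feature-level fusion: combine the streams' feature representations first,
# then classify, so appearance and motion information can interact.
appearance_feat = torch.randn(8, 512)    # hypothetical feature dimension
motion_feat = torch.randn(8, 512)
fused = torch.cat([appearance_feat, motion_feat], dim=1)  # (8, 1024)
classifier = torch.nn.Linear(1024, 101)
early = classifier(fused)
```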




Embodiment Construction

[0038] The present invention will be further described below in conjunction with the accompanying drawings.

[0039] In the action recognition method based on two-stream convolutional attention, a given video is first preprocessed to extract an appearance feature representation and a motion feature representation. The two feature representations are then fed into the convolutional attention module, which captures the video's key content as an appearance attention feature representation and a motion attention feature representation. Next, the two attention feature representations are fused with each other through the two-stream fusion module to obtain a two-stream feature representation that combines appearance and motion information. Finally, the two-stream feature representation is used to determine the action category of the video content. The method uses the convolutional attention mechanism to capture the latent patterns of video actions and effectively characterize the temporal relation...
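The paragraph above outlines a four-stage pipeline: per-stream feature extraction, convolutional attention, two-stream fusion, and classification. A minimal PyTorch sketch of that flow follows; all module internals, layer sizes, and names are assumptions for illustration, since the patent text here does not specify them:

```python
import torch
import torch.nn as nn

class TwoStreamConvAttention(nn.Module):
    """Sketch of the pipeline: extract features per stream, apply a
    convolutional attention module to each, fuse the streams, classify."""
    def __init__(self, feat_dim=512, num_classes=101):
        super().__init__()
        # Stand-ins for the appearance (RGB) and motion (optical-flow) extractors.
        self.appearance_net = nn.Conv3d(3, feat_dim, kernel_size=3, padding=1)
        self.motion_net = nn.Conv3d(2, feat_dim, kernel_size=3, padding=1)
        # Convolutional attention: a conv producing a per-position weight map.
        self.attn_a = nn.Sequential(nn.Conv3d(feat_dim, 1, 1), nn.Sigmoid())
        self.attn_m = nn.Sequential(nn.Conv3d(feat_dim, 1, 1), nn.Sigmoid())
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, frames, flow):
        fa = self.appearance_net(frames)      # (B, C, T, H, W)
        fm = self.motion_net(flow)
        fa = fa * self.attn_a(fa)             # appearance attention features
        fm = fm * self.attn_m(fm)             # motion attention features
        # Two-stream fusion: global-pool each stream and concatenate.
        fa = fa.mean(dim=(2, 3, 4))
        fm = fm.mean(dim=(2, 3, 4))
        fused = torch.cat([fa, fm], dim=1)
        return self.classifier(fused)         # action-category logits

model = TwoStreamConvAttention()
logits = model(torch.randn(2, 3, 16, 112, 112),   # 16 RGB frames
               torch.randn(2, 2, 16, 112, 112))   # 16 optical-flow fields
```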



Abstract

The invention discloses an action recognition method based on two-stream convolutional attention. The method first preprocesses the video to obtain a frame image sequence and an optical flow image sequence, and extracts the video's appearance feature representation and motion feature representation respectively. It then constructs a convolutional attention module to obtain attention feature representations for the frame images and the optical flow images, and fuses the two attention representations through the two-stream fusion module. Finally, an action recognition model is trained using the convolutional attention mechanism and the two-stream fusion method, and the model outputs the action category of a preprocessed new video. The method not only uses channel attention and spatio-temporal attention to capture the latent patterns and spatio-temporal relationships of video action content, but also fuses the video's appearance features and motion features from a global perspective through two-stream fusion, effectively alleviating the lack of long-range temporal information and improving the accuracy of action recognition.
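The abstract names two mechanisms inside the convolutional attention module: channel attention and spatio-temporal attention. Below is a minimal sketch of how such a pair could be applied in sequence on a 5-D video feature map; the CBAM-style composition and all layer sizes are assumptions, since the abstract does not disclose the exact operations:

```python
import torch
import torch.nn as nn

class ChannelSpatioTemporalAttention(nn.Module):
    """Channel attention followed by spatio-temporal attention on a
    (B, C, T, H, W) feature map. Layer sizes are illustrative."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Channel attention: squeeze spatio-temporal dims, excite channels.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        # Spatio-temporal attention: one weight per (t, h, w) position.
        self.st_conv = nn.Sequential(
            nn.Conv3d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):                                    # x: (B, C, T, H, W)
        b, c = x.shape[:2]
        w_c = self.channel_mlp(x.mean(dim=(2, 3, 4)))        # (B, C)
        x = x * w_c.view(b, c, 1, 1, 1)                      # re-weight channels
        # Pool across channels to describe each spatio-temporal position.
        desc = torch.cat([x.mean(dim=1, keepdim=True),
                          x.amax(dim=1, keepdim=True)], dim=1)  # (B, 2, T, H, W)
        return x * self.st_conv(desc)                        # re-weight positions

attn = ChannelSpatioTemporalAttention(512)
out = attn(torch.randn(2, 512, 8, 14, 14))   # same shape in, same shape out
```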

Description

Technical Field

[0001] The invention belongs to the technical field of computers, in particular the field of action recognition in video analysis, and specifically relates to an action recognition method based on two-stream convolutional attention.

Background

[0002] In recent years, video data of all kinds has grown day by day, and identifying the action content of a video has become a basic research topic underlying many video processing tasks. Action recognition technology assigns action categories to videos based on their content and has significant social value in application scenarios such as assisted driving, video content review, and personalized recommendation. For example, in vehicle assisted driving, action recognition can let users issue instructions to the navigation system through gestures, improving driving comfort; in video content review, an action recognition system can assist manual video con...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06V40/20, G06V20/40, G06N3/04, G06N3/08
CPC: G06N3/08, G06V40/20, G06V20/46, G06V20/41, G06N3/048, G06N3/045
Inventors: 李平, 马浩男, 曹佳晨, 徐向华
Owner: HANGZHOU DIANZI UNIV