Double-flow network behavior recognition method based on multi-level spatial-temporal feature fusion enhancement

A technology of spatiotemporal features and recognition methods, applied in character and pattern recognition, biological neural network models, instruments, etc., can solve problems such as weakening effects, and achieve the effect of improving the accuracy of behavior recognition

Active Publication Date: 2020-09-25
JIANGNAN UNIV

AI Technical Summary

Problems solved by technology

However, the traditional dual-stream network still faces the following problems: (1) How can the information captured separately by the two streams be used effectively? (2) Treating every region and channel of the features equally in the network weakens the effect of those features that contribute most to classification.

Method used


Image

  • Double-flow network behavior recognition method based on multi-level spatial-temporal feature fusion enhancement

Examples


Embodiment Construction

[0027] Figure 2 is the overall model diagram of the present invention.

[0028] Figure 2 shows the algorithm model diagram of the present invention. The algorithm takes multi-segment RGB images and optical flow maps as input, and the model comprises five key parts: a spatial network, a temporal network, a feature fusion network, multi-segment category probability distribution fusion, and multi-stream category probability distribution fusion. Both the spatial network and the temporal network are built on InceptionV3, and the feature fusion network is constructed from the spatial and temporal networks. Briefly, the proposed multi-level spatio-temporal feature fusion module fuses spatio-temporal mixed features from different depth levels, where each spatio-temporal mixed feature is obtained by using the proposed spatio-temporal feature fusion module to fuse the features extracted from the spatial and temporal networks; the proposed grouping attention enhancement module then automatically enhances the saliency of effective regions and channels in the features.
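The pipeline above can be sketched numerically. This is a minimal numpy illustration, not the patented implementation: the fusion operation (channel concatenation), the feature shapes, and the use of simple averaging for the segment-level and stream-level probability fusion are all assumptions made for the example, since the text does not specify them exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature maps (channels, H, W) taken from three depth
# levels of the spatial and temporal InceptionV3 streams.
levels = [(64, 28, 28), (128, 14, 14), (256, 7, 7)]
spatial = [rng.standard_normal(s) for s in levels]
temporal = [rng.standard_normal(s) for s in levels]

def fuse(s, t):
    # Spatio-temporal feature fusion module (sketch): concatenate the
    # two streams along the channel axis at the same depth level.
    return np.concatenate([s, t], axis=0)

# Multi-level spatio-temporal mixed features, one per depth level.
mixed = [fuse(s, t) for s, t in zip(spatial, temporal)]

def gap(x):
    # Global average pooling: feature map -> feature vector.
    return x.mean(axis=(1, 2))

# Combine the mixed features of all depth levels into one vector.
fusion_vec = np.concatenate([gap(m) for m in mixed])

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Late fusion of class probability distributions: average over the
# video segments of each stream, then over the spatial, temporal,
# and fusion streams.
num_classes, num_segments = 10, 3
seg_probs = {s: [softmax(rng.standard_normal(num_classes))
                 for _ in range(num_segments)]
             for s in ("spatial", "temporal", "fusion")}
stream_probs = {s: np.mean(p, axis=0) for s, p in seg_probs.items()}
final = np.mean(list(stream_probs.values()), axis=0)
print(fusion_vec.shape, final.shape)
```

Because each segment-level output is a probability distribution and only averages are taken, the fused `final` vector remains a valid distribution over the classes.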



Abstract

The invention discloses a double-flow network behavior recognition method based on multi-level spatial-temporal feature fusion enhancement. The method adopts a network architecture based on a spatial-temporal two-stream network, called the multi-level spatial-temporal feature fusion enhancement network. The method addresses two problems of the traditional two-stream network, in which the category probability distributions of the two streams are fused only at the last layer: the effect of shallow features is ignored, and the complementary features of the two streams cannot be fully utilized. The method therefore provides a multi-level spatial-temporal feature fusion module, which captures mixed features at multiple depth levels of the two streams through spatio-temporal feature fusion modules placed at different depths, so as to make full use of the two-stream network. In addition, treating all features equally in the network weakens the effect of those features that contribute most to classification. The method therefore provides a grouping attention enhancement module in the network, which automatically enhances the saliency of effective regions and channels in the features. Finally, the robustness of the behavior recognition model is further improved by aggregating the classification results of the two-stream network and the feature fusion network.
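The grouping attention idea described above can be illustrated with a small numpy sketch. The grouping scheme, the pooled-activation scoring, and the sigmoid gating below are assumed forms chosen for the example; the patent's exact formulation is not reproduced here.

```python
import numpy as np

def grouped_channel_attention(feat, num_groups=4):
    """Grouping attention sketch: split the channels into groups,
    score each group by its pooled activation, and rescale each group
    with a sigmoid weight so that salient groups are enhanced
    relative to the others."""
    c, h, w = feat.shape
    assert c % num_groups == 0, "channels must divide evenly into groups"
    groups = feat.reshape(num_groups, c // num_groups, h, w)
    scores = groups.mean(axis=(1, 2, 3))        # one saliency score per group
    weights = 1.0 / (1.0 + np.exp(-scores))     # sigmoid gating in (0, 1)
    out = groups * weights[:, None, None, None]  # rescale each group
    return out.reshape(c, h, w)

rng = np.random.default_rng(1)
x = rng.standard_normal((64, 14, 14))  # a hypothetical feature map
y = grouped_channel_attention(x)
print(y.shape)  # same shape as the input
```

Since the gate is applied per group rather than per channel, channels within a group share one saliency weight, which keeps the module cheap while still suppressing uninformative regions of the feature map.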

Description

Technical Field

[0001] The invention belongs to the field of machine vision, and in particular relates to a dual-stream network behavior recognition method based on fusion and enhancement of multi-level spatio-temporal features.

Background Technique

[0002] Action recognition has become an active field in the computer vision community and is widely applied in areas such as video surveillance, violence detection, and human-computer interaction. Video action recognition aims to mine the key features that express the target action represented by a video. Compared with static images, video contains rich motion information; however, the diversity of action scenes still makes the extraction of effective features challenging. Therefore, the present invention takes video as the research object and proposes a unique feature fusion method and attention method to effectively extract discriminative features for behavior recognition, in view of the problems faced by the network ...

Claims


Application Information

IPC(8): G06K9/00, G06K9/62, G06N3/04
CPC: G06V40/20, G06N3/045, G06F18/2415, G06F18/256, G06F18/253
Inventor: 孔军 (Kong Jun), 王圣全 (Wang Shengquan), 蒋敏 (Jiang Min)
Owner JIANGNAN UNIV