Violent behavior recognition method based on sequential guidance of spatial attention

A violent behavior recognition method that uses temporal information to guide spatial attention. It is applied in character and pattern recognition, image data processing, and image enhancement. It addresses the shortcomings of 3D convolutional networks, namely a large number of parameters, heavy time and space resource consumption, and difficulty meeting real-time requirements, and achieves good application value, improved accuracy, and reduced background interference.

Active Publication Date: 2020-07-14
XI AN JIAOTONG UNIV +1

AI Technical Summary

Problems solved by technology

[0003] Violent behavior recognition methods based on deep learning can be divided into three categories. The first uses a two-stream structure of RGB and optical flow, which requires optical flow to be extracted and stored in advance; since optical flow extraction consumes a great deal of time and space, such methods have difficulty meeting real-time requirements.
The second type of method adopts a 3D convolutional network structure. Although these methods recognize quickly, the parameters of a 3D convolutional network are usually numerous and the hardware requirements high, so they are difficult to apply in practice.
The third type of method uses the convolutional long short-term memory (ConvLSTM) structure. Because every frame shares the ConvLSTM parameters over time, these methods have the advantage of a small number of parameters, but they remain susceptible to background interference, and missed detections are pronounced when the moving target is small.

Method used




Embodiment Construction

[0033] The present invention is elaborated below in conjunction with the accompanying drawings:

[0034] As shown in Figure 1, the violent behavior recognition method based on temporally guided spatial attention provided by the present invention comprises the following steps:

[0035] 1) Two-stream feature extraction and fusion: for the input continuous video sequence, a deep convolutional neural network is used to extract features from the RGB images and the frame-difference images respectively, and the two-stream features are fused and passed to the temporally guided spatial attention module.
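The following is a minimal PyTorch sketch of this step, for illustration only. The ResNet-18 backbone, the channel-wise concatenation followed by a 1x1 convolution for fusion, and all tensor shapes are assumptions; the patent text only specifies that a parameter-sharing deep convolutional network extracts RGB and frame-difference features and that the two streams are fused.

```python
# Sketch of two-stream feature extraction and fusion (step 1).
# Assumptions (not taken from the patent text): a ResNet-18 trunk shared by
# both streams, and fusion by channel-wise concatenation + 1x1 convolution.
import torch
import torch.nn as nn
import torchvision.models as models


class TwoStreamFusion(nn.Module):
    def __init__(self, fused_channels=256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Keep the convolutional trunk only (drop avgpool/fc) so the output
        # is a spatial feature map rather than a vector.
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # -> 512 x h x w
        # Both streams share the same parameters: self.cnn is reused below.
        self.fuse = nn.Conv2d(512 * 2, fused_channels, kernel_size=1)

    def forward(self, frames):
        # frames: (B, T, 3, H, W) consecutive RGB frames of a video clip
        B, T, C, H, W = frames.shape
        # Frame-difference images as the motion (temporal-domain) representation.
        diff = frames[:, 1:] - frames[:, :-1]              # (B, T-1, 3, H, W)
        rgb = frames[:, 1:]                                # aligned with diff
        rgb_feat = self.cnn(rgb.reshape(-1, C, H, W))      # (B*(T-1), 512, h, w)
        diff_feat = self.cnn(diff.reshape(-1, C, H, W))
        fused = self.fuse(torch.cat([rgb_feat, diff_feat], dim=1))
        h, w = fused.shape[-2:]
        return fused.view(B, T - 1, -1, h, w)              # per-frame fused features
```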

[0036] 2) Temporally guided spatial attention: the module uses the temporal features output by the ConvLSTM to guide the spatial attention module, assigning different weights to different spatial regions of the feature map and directing the network's attention to the moving regions. Finally, the recognized category and score are output from the weighted features.
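A hedged sketch of this module, continuing from the fused features above, is given below. The patent states that the implicit temporal state of the ConvLSTM guides the spatial attention weights; the specific mapping used here (a 1x1 convolution on the hidden state followed by a sigmoid), the global average pooling, and the linear classifier are illustrative assumptions rather than the patented design.

```python
# Sketch of the temporally guided spatial attention step (step 2).
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c


class TemporallyGuidedAttention(nn.Module):
    def __init__(self, feat_ch=256, hid_ch=128, num_classes=2):
        super().__init__()
        self.cell = ConvLSTMCell(feat_ch, hid_ch)
        self.attn = nn.Conv2d(hid_ch, 1, kernel_size=1)    # hidden state -> weight map
        self.classifier = nn.Linear(feat_ch, num_classes)

    def forward(self, feats):
        # feats: (B, T, C, h, w) fused two-stream features from step 1
        B, T, C, h, w = feats.shape
        hdn = feats.new_zeros(B, self.cell.hid_ch, h, w)
        cel = torch.zeros_like(hdn)
        for t in range(T):
            x = feats[:, t]
            # Spatial weights guided by the temporal (hidden) state, emphasising
            # moving regions and suppressing static background.
            weights = torch.sigmoid(self.attn(hdn))        # (B, 1, h, w)
            x = x * weights
            hdn, cel = self.cell(x, (hdn, cel))
        pooled = x.mean(dim=(2, 3))                        # last weighted feature, pooled
        return self.classifier(pooled)                     # category scores
```

In this sketch, deriving the weights from the hidden state carried across frames stands in for the patent's idea of assigning spatial weights from global motion information rather than from the current frame alone.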

[0037] Specifically, in...



Abstract

The invention discloses a violent behavior recognition method based on temporal guidance of spatial attention. The method extracts RGB image and frame-difference image features with a deep convolutional network whose parameters are shared between the two streams; the RGB and frame-difference features serve as representations of spatial-domain and temporal-domain information respectively, and the two-stream features are fused, improving the features' ability to characterize violent behavior. A spatial attention module is guided by the temporal sequence: the implicit temporal state of the ConvLSTM is used to guide the spatial attention weights. Compared with conventional self-attention, the temporally guided spatial attention assigns spatial weights according to global motion information, directs the network to focus on moving regions, and ignores background interference; at the same time, increasing the proportion of motion-region features reduces missed detections when the target is relatively small. Test results on public datasets verify that the method effectively improves violent behavior recognition performance.

Description

Technical field
[0001] The invention belongs to the field of behavior recognition, and in particular relates to a violent behavior recognition method based on temporally guided spatial attention.
Background technique
[0002] Violent behavior disrupts social order and endangers public safety. Timely identification and early warning of violent behavior, and the containment of violent incidents, are of great significance to public security. Traditional manual monitoring not only consumes a great deal of manpower, but is also prone to missed detections caused by lapses in the monitors' attention. In recent years, deep learning based behavior recognition methods have received extensive attention, which has also driven improvements in the performance of violent behavior detection algorithms.
[0003] Violent behavior recognition methods based on deep learning can be divided into three categories. The first uses a two-stream structure of RGB and optical flow, which...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K 9/00; G06K 9/62; G06T 7/254; G06N 3/04
CPC: G06T 7/254; G06T 2207/10016; G06T 2207/20224; G06T 2207/30232; G06T 2207/30196; G06V 40/20; G06N 3/044; G06N 3/045; G06F 18/24; G06F 18/253
Inventors: 李凡, 张斯瑾, 贺丽君
Owner: XI AN JIAOTONG UNIV