
A Violent Behavior Recognition Method Based on Temporal Guided Spatial Attention

A method in the field of violent behavior recognition based on temporal-sequence-guided spatial attention. It addresses problems such as the large parameter count of 3D convolutional networks and the difficulty of meeting real-time requirements, and achieves the effects of reducing background interference, reducing missed detections, and improving accuracy.

Active Publication Date: 2022-04-22
XI AN JIAOTONG UNIV +1

AI Technical Summary

Problems solved by technology

[0003] Violent behavior recognition methods based on deep learning can be divided into three categories. The first uses a two-stream structure of RGB and optical flow; optical flow must be extracted and stored in advance, and its extraction consumes a large amount of time and storage, so this approach has difficulty meeting real-time requirements.
The second category adopts a 3D convolutional network structure. Although such methods recognize quickly, a 3D convolutional network usually has a large number of parameters and high hardware requirements, so it is difficult to apply in practice.
The third category uses a convolutional long short-term memory network (ConvLSTM). Because all frames share the ConvLSTM parameters over time, this structure has the advantage of few parameters, but it remains susceptible to background interference; in particular, when the moving object is small, missed detections are pronounced.




Embodiment Construction

[0033] The present invention is described in detail below in conjunction with the accompanying drawings:

[0034] As shown in Figure 1, the violent behavior recognition method based on temporal-sequence-guided spatial attention provided by the present invention comprises the following steps:

[0035] 1) Two-stream feature extraction and fusion. For the input continuous video sequence, a deep convolutional neural network extracts features from the RGB images and the frame-difference images respectively, and the two-stream features are fused and passed to the temporally guided spatial attention module, as sketched below.
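The two-stream step can be pictured with a short sketch. The following is a minimal illustration in PyTorch, not the patent's actual network: the ResNet-18 backbone, the element-wise-sum fusion, and all tensor shapes are assumptions made only for this example. The points it shows are the shared-parameter backbone applied to both the RGB frames and the frame-difference images, and the fusion of the two feature streams.

```python
# Minimal sketch of two-stream feature extraction with shared parameters.
# Backbone, fusion operator and shapes are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models


class TwoStreamFusion(nn.Module):
    def __init__(self):
        super().__init__()
        # One backbone whose parameters are shared by the RGB stream and the
        # frame-difference stream (dual-stream parameter sharing).
        backbone = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool/fc

    def forward(self, frames):
        # frames: (B, T, 3, H, W) consecutive RGB video frames
        b, t, c, h, w = frames.shape
        # Frame-difference images approximate short-term motion (temporal cue).
        diff = frames[:, 1:] - frames[:, :-1]                 # (B, T-1, 3, H, W)
        rgb = frames[:, 1:]                                   # align RGB with diff frames

        rgb_feat = self.encoder(rgb.reshape(-1, c, h, w))     # (B*(T-1), C', H', W')
        diff_feat = self.encoder(diff.reshape(-1, c, h, w))   # same shared weights

        fused = rgb_feat + diff_feat                          # assumed fusion: element-wise sum
        _, cf, hf, wf = fused.shape
        return fused.reshape(b, t - 1, cf, hf, wf)            # per-frame fused features


if __name__ == "__main__":
    clip = torch.randn(2, 8, 3, 112, 112)                     # toy clip: 2 videos, 8 frames
    feats = TwoStreamFusion()(clip)
    print(feats.shape)                                        # e.g. torch.Size([2, 7, 512, 4, 4])
```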

[0036] 2) Temporally guided spatial attention. The module uses the temporal features output by ConvLSTM to guide the spatial attention mechanism in assigning different weights to different spatial regions of the feature map, directing the network toward the moving regions. Finally, the recognized category and score are output from the weighted features.
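To make this concrete, here is a minimal sketch, again in PyTorch and again an assumption about the design rather than the patent's exact module: the ConvLSTM cell, the 1x1-convolution attention head, the sigmoid weighting, and the classifier dimensions are all illustrative. It shows the hidden temporal state producing one weight per spatial location, which then rescales the fused features before classification.

```python
# Minimal sketch of temporal-state-guided spatial attention; all module
# choices and dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        # A single convolution computes the input, forget, output and candidate gates jointly.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel_size=3, padding=1)
        self.hid_ch = hid_ch

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class TemporalGuidedAttention(nn.Module):
    def __init__(self, feat_ch=512, hid_ch=128, num_classes=2):
        super().__init__()
        self.cell = ConvLSTMCell(feat_ch, hid_ch)
        # Hidden temporal state -> one spatial attention weight per location.
        self.attn = nn.Conv2d(hid_ch, 1, kernel_size=1)
        self.classifier = nn.Linear(feat_ch, num_classes)

    def forward(self, feats):
        # feats: (B, T, C, H, W) fused two-stream features per frame
        b, t, c, hgt, wid = feats.shape
        h = feats.new_zeros(b, self.cell.hid_ch, hgt, wid)
        cst = torch.zeros_like(h)
        weighted = None
        for step in range(t):
            h, cst = self.cell(feats[:, step], (h, cst))
            # Spatial weights derived from the temporal state emphasise moving regions.
            w = torch.sigmoid(self.attn(h))                        # (B, 1, H, W)
            weighted = (feats[:, step] * w).mean(dim=(2, 3))       # (B, C)
        return self.classifier(weighted)                           # class scores from last frame


if __name__ == "__main__":
    scores = TemporalGuidedAttention()(torch.randn(2, 7, 512, 4, 4))
    print(scores.shape)                                            # torch.Size([2, 2])
```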

[0037] Specifica...



Abstract

The invention discloses a violent behavior recognition method based on temporal-sequence-guided spatial attention. The method adopts a deep convolutional network with dual-stream parameter sharing to extract RGB image and frame-difference image features, which serve as representations of spatial-domain and temporal-domain information respectively; the fusion of the two-stream features improves their ability to represent violent behavior. In the temporally guided spatial attention module, the hidden temporal state of ConvLSTM guides the assignment of spatial attention weights. Compared with traditional self-attention, temporally guided spatial attention assigns spatial weights according to global motion information, directs the network toward the moving regions, suppresses interference from background information, and increases the proportion of motion-region features, reducing missed detections when the target is small. Test results on public data sets verify the effectiveness of the invention in improving violent behavior recognition performance.

Description

Technical Field

[0001] The invention belongs to the field of behavior recognition, and in particular relates to a violent behavior recognition method based on temporal-sequence-guided spatial attention.

Background Technique

[0002] Violent behavior disrupts social order and endangers public safety. Timely identification and early warning of violent behavior, and the containment of violent incidents, are of great significance to public security. Traditional manual monitoring not only consumes a great deal of manpower but is also prone to missed detections caused by lapses in the monitors' attention. In recent years, deep-learning-based behavior recognition methods have received extensive attention, which has also driven improvements in the performance of violent behavior detection algorithms.

[0003] Violent behavior recognition methods based on deep learning can be divided into three categories. The first uses a two-stream structure of RGB and optical flow, which...


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06V40/20, G06V10/62, G06V10/80, G06V10/82, G06K9/62, G06T7/254, G06N3/04
CPC: G06T7/254, G06T2207/10016, G06T2207/20224, G06T2207/30232, G06T2207/30196, G06V40/20, G06N3/044, G06N3/045, G06F18/24, G06F18/253
Inventor: Li Fan, Zhang Sijin, He Lijun
Owner: XI AN JIAOTONG UNIV