Video behavior identification method based on time sequence causal convolutional network

A convolutional-network recognition method applied in the field of video behavior recognition based on temporal causal convolutional networks. It addresses the problems that existing methods cannot be applied to real-time video streams and incur high computational cost, achieving reduced computational cost, high computational efficiency, and reduced model capacity.

Active Publication Date: 2019-08-27
FUDAN UNIV


Problems solved by technology

However, this method can only handle offline video and cannot be applied to real-time video streams.



Embodiment Construction

[0029] The present invention is further described below in conjunction with the accompanying drawings and embodiments.

[0030] Figure 1 shows the processing system diagram of the temporal causal three-dimensional convolutional neural network for online behavior recognition of the present invention. The system comprises an input stream of video frame pictures, spatial convolution layers, basic network modules of temporal causal convolution and a causal self-attention mechanism, a behavior classifier, and a progress regressor.
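The key property of the temporal causal convolution named above is that the output at frame t depends only on frames up to t, which is what makes online (streaming) recognition possible. Below is a minimal single-channel NumPy sketch of that idea, not the patent's implementation; the function name, the zero left-padding choice, and the single-channel simplification are all illustrative assumptions.

```python
import numpy as np

def causal_temporal_conv(x, w):
    """Causal filtering along the time axis of a video feature map.

    x: array of shape (T, H, W) -- one channel of a feature-map sequence.
    w: 1-D kernel of length K. The output at frame t is a weighted sum of
       frames t-K+1 .. t only; missing past frames are zero-padded, so no
       future frame is ever read (the causality property).
    """
    K = len(w)
    T = x.shape[0]
    # pad K-1 zero frames on the LEFT of the time axis only
    xp = np.concatenate([np.zeros((K - 1,) + x.shape[1:]), x], axis=0)
    # window ending at frame t: xp[t : t+K] covers original frames <= t
    return np.stack(
        [np.tensordot(w, xp[t:t + K], axes=(0, 0)) for t in range(T)]
    )
```

With the kernel [0, 0, 1] the filter reduces to the identity, and perturbing a later frame never changes an earlier output, which is exactly the guarantee an online recognizer needs.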

[0031] Figure 2 shows the fusion module of temporal causal convolution and spatial convolution of the present invention for modeling short-term spatiotemporal features. The input feature map X is passed through two parallel branches, a temporal causal convolution with a 3×1×1 kernel and a spatial convolution with a 1×3×3 kernel, to obtain two feature maps, whose elements are added to obtain...
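The two-branch fusion described above can be sketched in a few lines of NumPy. This is a hedged single-channel illustration under stated assumptions (zero padding in both branches, element-wise addition as the fusion), not the patent's actual module; the function and parameter names are invented for this example.

```python
import numpy as np

def fuse_temporal_spatial(x, w_t, w_s):
    """Two-branch short-term spatiotemporal fusion, single channel.

    x:   feature map of shape (T, H, W).
    w_t: length-3 causal temporal kernel (the 3x1x1 branch).
    w_s: 3x3 spatial kernel (the 1x3x3 branch), applied per frame with
         zero padding so H and W are preserved.
    Returns the element-wise sum of the two branch outputs.
    """
    T, H, W = x.shape
    # temporal branch: causal, so pad 2 zero frames on the left only
    xp = np.concatenate([np.zeros((2, H, W)), x], axis=0)
    temporal = np.stack(
        [np.tensordot(w_t, xp[t:t + 3], axes=(0, 0)) for t in range(T)]
    )
    # spatial branch: per-frame 3x3 correlation with zero padding
    sp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    spatial = np.zeros_like(x, dtype=float)
    for i in range(3):
        for j in range(3):
            spatial += w_s[i, j] * sp[:, i:i + H, j:j + W]
    # fusion: element-wise addition of the two feature maps
    return temporal + spatial
```

Note the design choice this models: the temporal branch mixes information only backwards in time, while the spatial branch is free to use the full 3×3 neighborhood within each frame, so the fused output remains causal frame by frame.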



Abstract

The invention belongs to the technical field of computer image analysis and particularly relates to a video behavior identification method based on a temporal causal convolutional network. The method comprises the following steps: extracting spatiotemporal semantic feature representations from a plurality of video clips using a temporal causal three-dimensional convolutional neural network to obtain a predicted behavior category; and modeling the frame sequence up to the current moment to extract high-level spatiotemporal semantic features for behavior localization and progress prediction, wherein a fusion mechanism of spatial convolution and temporal convolution and a causal spatiotemporal attention mechanism are designed. The method has the advantages of high precision, high computational efficiency, and real-time performance; it is suitable for online real-time video behavior detection and analysis tasks and can also be used for offline video behavior recognition, abnormal event monitoring, and other tasks.

Description

Technical field

[0001] The invention belongs to the technical field of computer image analysis and in particular relates to a video behavior recognition method based on a temporal causal convolutional network.

Background technique

[0002] Video behavior detection and recognition is a classic task in computer vision and a fundamental problem in the sub-field of video understanding that has been studied for many years. Because video data is difficult to label and analyze, and modeling spatiotemporal features is very hard, video behavior recognition technology has developed relatively slowly. Driven by advances in deep learning, learning high-level spatial and temporal semantic features through neural networks has become the mainstream. However, because video data is large and commonly used deep network models are computationally expensive, practical video behavior recognition systems remain relatively scarce, and ther...

Claims


Application Information

IPC(8): G06K9/00; G06K9/62
CPC: G06V20/42; G06F18/241; G06F18/253
Inventor: 姜育刚, 程昌茂
Owner: FUDAN UNIV