Video behavior recognition method based on spatial-temporal feature fusion deep learning network

A deep-learning and spatio-temporal-feature technology, applied in the field of video behavior recognition based on a spatio-temporal feature fusion deep learning network. It addresses problems such as behavior confusion, insufficient feature mining, and the heavy training load of complex network structures, and achieves the effect of increasing the influence of the visual subject and improving recognition accuracy.

Pending Publication Date: 2020-11-17
Owner: BEIJING NORMAL UNIV ZHUHAI

AI Technical Summary

Problems solved by technology

[0009] The above methods all suffer from the long-term dependence problem and from insufficient mining of spatial features. Because behavior persists over time, accurately identifying a behavior often depends on a longer time segment; without long-range analysis, one behavior is easily recognized as another. Although analyzing longer segments can improve recognition accuracy, an overly long analysis window brings a more complex network structure and a multiplied training load. Moreover, current research on extracting the dynamic temporal features of video is clearly deficient: features are obtained from RGB (Red-Green-Blue) frames only through C3D or recurrent neural networks, and such a single feature is still not enough to fully capture the dynamics of the video's time dimension. Spatial feature extraction is likewise insufficient, so existing network models are prone to confusion and misjudgment.




Embodiment Construction

[0063] In order to make the object, technical solution, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.

[0064] A video behavior recognition method based on a spatio-temporal feature fusion deep learning network, characterized by comprising the following steps:

[0066] (1) Expand the data set by three methods: horizontal mirror inversion, small-angle rotation, and cropping. The small-angle rotations are clockwise rotations of 30°, 15°, -15° and -30°, respectively. Two independent networks are used to extract the temporal and spatial information of the video respectively (the augmentation step is sketched below).
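As a concrete illustration of this augmentation step, the following Python sketch uses PIL and torchvision; the library choice and the 224×224 crop size are assumptions, since the patent does not specify an implementation.

```python
# Minimal sketch of the data-expansion step in [0066]: horizontal mirror
# inversion, four small-angle rotations, and cropping. PIL/torchvision and
# the crop size are assumptions; the patent names no library.
from PIL import Image
import torchvision.transforms.functional as TF

ROTATION_ANGLES_CW = [30, 15, -15, -30]  # clockwise angles from the text

def augment_frame(frame: Image.Image, crop_size=(224, 224)) -> list:
    """Expand one video frame into its augmented variants."""
    variants = [frame]
    variants.append(TF.hflip(frame))              # horizontal mirror inversion
    for angle in ROTATION_ANGLES_CW:
        # TF.rotate is counter-clockwise, so negate for a clockwise rotation.
        variants.append(TF.rotate(frame, -angle))
    variants.append(TF.center_crop(frame, list(crop_size)))  # cropping
    return variants
```

Applied to every sampled frame, this turns each training sample into seven variants (original, mirror, four rotations, crop), consistent with the three expansion methods named above.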



Abstract

The invention discloses a video behavior recognition method based on a spatio-temporal feature fusion deep learning network (FSTFN). It adopts two independent networks to extract the temporal and spatial information of a video respectively, adds an LSTM on top of the CNN in each network to learn the video's temporal information, and fuses the temporal and spatial information with a certain strategy. On the test data set, the accuracy of the FSTFN is improved by 7.5% over the network model without a spatio-temporal network proposed by Tran, and by 4.7% over a common two-stream network model. The video is handled in a segmented manner: each video sample is sampled into a number of segments that are input into a network composed of a CNN and an LSTM, so that the segments cover the time range of the whole video. This solves the long-term dependence problem in video behavior recognition. A visual attention mechanism is introduced at the end of the CNN, which reduces the weight of non-visual subjects in the network model, increases the influence of the visual subject in the video image frames, and makes better use of the video's spatial features.
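The architecture the abstract describes can be outlined in code. The PyTorch sketch below is one plausible reading, not the patent's implementation: the ResNet-18 backbone, the 1×1-convolution attention, the hidden size, and the equal-weight score fusion are all assumptions; the patent states only that each stream is a CNN with an LSTM on top and a visual attention mechanism at the end of the CNN, with the streams fused by "a certain strategy".

```python
# Hedged sketch of an FSTFN-style stream: CNN backbone, a visual-attention
# layer at the end of the CNN, and an LSTM over per-segment features.
# Backbone, layer sizes, and fusion weights are assumptions, not from the patent.
import torch
import torch.nn as nn
import torchvision.models as models

class AttentionCNNLSTM(nn.Module):
    """One stream (spatial or temporal): CNN -> attention -> LSTM -> scores."""
    def __init__(self, num_classes: int, hidden: int = 512):
        super().__init__()
        backbone = models.resnet18(weights=None)          # assumed backbone
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # keep spatial map
        self.attn = nn.Conv2d(512, 1, kernel_size=1)      # 1x1 conv -> attention map
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, segments, 3, H, W) -- one sampled frame per video segment
        b, t = x.shape[:2]
        maps = self.cnn(x.flatten(0, 1))                   # (b*t, 512, h, w)
        w = torch.softmax(self.attn(maps).flatten(2), dim=-1)  # weights over locations
        feats = (maps.flatten(2) * w).sum(dim=-1)          # attention-weighted pooling
        out, _ = self.lstm(feats.view(b, t, -1))           # temporal modelling over segments
        return self.fc(out[:, -1])                         # class scores from the last step

def fuse_scores(rgb: torch.Tensor, temporal: torch.Tensor, alpha: float = 0.5):
    """Score-level fusion of the two streams; the abstract's 'certain strategy'
    is unspecified, so an equal weighting is assumed here."""
    return alpha * rgb + (1 - alpha) * temporal
```

In use, each video would be split into segments covering its whole time range, one stream would receive RGB frames and the other a temporal input (for example optical flow, a common choice in two-stream models, though the patent does not name it here), and the two score vectors would be fused into the final prediction.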

Description

【Technical field】

[0001] The invention relates to a video behavior recognition method, in particular to a video behavior recognition method based on a spatio-temporal feature fusion deep learning network.

【Background technique】

[0002] Video content behavior recognition aims to classify video clips to determine behavior types. At present, video content behavior recognition technology is mainly divided into two directions: the traditional way of extracting hand-designed features, and the way of using deep learning to establish an end-to-end predictive network model.

[0003] Traditional behavior recognition methods first design and extract the relevant visual features, then encode these features, and finally use the relevant classification methods from statistical machine learning to obtain the predicted classification results.

[0004] Most deep learning network models are end-to-end models, using convolutional neural networks (Convolutional Neural Network...


Application Information

IPC(8): G06K9/00; G06K9/46; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/049; G06N3/08; G06V20/41; G06V10/56; G06N3/045; G06F18/2415
Inventor: 杨戈
Owner: BEIJING NORMAL UNIV ZHUHAI