
A Video Behavior Recognition Method Based on Deep Convolutional Features

A deep-convolution-based recognition method in the field of computer vision. It addresses the shortcomings of prior approaches, which ignore motion characteristics and do not consider trajectory features or their temporal ordering, and therefore achieve low classification accuracy; the invention achieves more accurate feature extraction and improved recognition accuracy.

Active Publication Date: 2021-12-21
SOUTH CHINA UNIV OF TECH

AI Technical Summary

Problems solved by technology

Compared with traditional hand-crafted features, deep learning features are more discriminative and hierarchical; however, prior methods do not consider trajectory features or their temporal ordering and ignore motion characteristics, resulting in low final classification accuracy.



Examples


Embodiment

[0022] This embodiment provides a video behavior recognition method based on deep convolutional features. The flow of the method is shown in Figure 1 and includes the following steps:

[0023] S1. Obtain training data: obtain the videos and their corresponding labels from the training video dataset and extract frames at a fixed frame rate to obtain the training samples and their categories, which cover all behavior types appearing in the training videos. Extract the dense trajectories of the video: every 15 frames, perform dense sampling on a grid and track the sampled points across those 15 frames with the dense trajectory algorithm to obtain a trajectory for each sampled point; then remove static trajectories and trajectories with excessively large changes to obtain the dense trajectories of the video;
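The trajectory-filtering rule in S1 (drop static tracks and tracks with excessively large changes) can be sketched as follows. This is an illustrative numpy sketch, not the patent's implementation: the function name, the thresholds, and the exact criteria (total displacement for "static", largest single-frame jump for "excessive change") are all assumptions, since the patent text does not specify them.

```python
import numpy as np

def filter_trajectories(trajectories, min_disp=1.0, max_step=20.0):
    """Keep trajectories that are neither static nor erratic.

    trajectories: array of shape (N, L, 2) -- N tracks of L (x, y) points
                  (L = 15 in the patent's sampling scheme).
    min_disp:     tracks whose total path length is below this are treated
                  as static and removed (illustrative threshold).
    max_step:     tracks with any single-frame jump above this are treated
                  as erroneous tracking and removed (illustrative threshold).
    """
    steps = np.diff(trajectories, axis=1)        # (N, L-1, 2) frame-to-frame motion
    step_len = np.linalg.norm(steps, axis=2)     # (N, L-1) per-step distances
    total_disp = step_len.sum(axis=1)            # (N,) total path length
    keep = (total_disp >= min_disp) & (step_len.max(axis=1) <= max_step)
    return trajectories[keep]
```

With this rule, a track that never moves and a track containing a 50-pixel jump are both discarded, while a track that drifts steadily one pixel per frame survives.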

[0024] S2. Extract the deep convolutional spatial features of the video: input the video sequence into the pre-trained spatial neural ne...
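The text of S2 is truncated above, but combining the per-frame convolutional feature maps with the trajectories from S1 requires reading out a feature vector at each trajectory point, which S5 then pools along the track. A hypothetical numpy sketch of that read-out step follows; the function name, the nearest-neighbor coordinate mapping, and all shapes are assumptions for illustration, not from the patent, and the feature maps would in practice come from the pre-trained spatial network rather than being synthesized.

```python
import numpy as np

def sample_conv_features(feat_maps, trajectories, frame_size):
    """Sample convolutional feature vectors at trajectory points.

    feat_maps:    (T, H, W, C) per-frame conv feature maps (e.g. from a
                  pre-trained spatial CNN).
    trajectories: (N, T, 2) pixel coordinates (x, y), one point per frame.
    frame_size:   (width, height) of the original frames, used to map
                  pixel coordinates onto the coarser feature-map grid.
    Returns (N, T, C): one feature vector per trajectory point.
    """
    T, H, W, C = feat_maps.shape
    fw, fh = frame_size
    # Nearest-neighbor mapping from frame pixels to feature-map cells.
    xs = np.clip((trajectories[..., 0] / fw * W).astype(int), 0, W - 1)
    ys = np.clip((trajectories[..., 1] / fh * H).astype(int), 0, H - 1)
    t_idx = np.arange(T)[None, :]          # broadcast frame index over tracks
    return feat_maps[t_idx, ys, xs]        # fancy indexing -> (N, T, C)
```

The same read-out applies unchanged to the temporal (optical-flow) feature maps of S3, so both streams yield per-trajectory feature sequences of identical shape.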



Abstract

The invention discloses a video behavior recognition method based on deep convolutional features, comprising the following steps: 1) extract the dense trajectories of the video; 2) extract the deep convolutional spatial features of the video; 3) compute the video's optical flow and extract the deep convolutional temporal features; 4) apply spatio-temporal normalization and inter-channel normalization to the deep convolutional spatial features and temporal features respectively; 5) pool the normalized spatial and temporal features over time along the dense trajectories; 6) concatenate the pooled spatial and temporal features and classify them with an LSTM network. In combining deep learning features with trajectory features, the method takes the temporal ordering of the trajectory features into account, so it makes more effective use of video trajectory information and extracts features more accurately. Finally, using an LSTM network as the classifier further improves the accuracy of behavior recognition.
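Step 4's two normalizations can be sketched as follows. This is a minimal illustration, assuming L1 normalization — the abstract does not state which norm is used — applied first over all spatio-temporal positions of each channel, then across channels at each position:

```python
import numpy as np

def normalize_features(feats, eps=1e-8):
    """Spatio-temporal then inter-channel normalization of conv features.

    feats: (T, H, W, C) deep convolutional features (spatial or temporal).
    First each channel is L1-normalized over all (T, H, W) positions
    (spatio-temporal normalization), then each position's vector is
    L1-normalized across the C channels (inter-channel normalization).
    The choice of L1 norm is an assumption for illustration.
    """
    st = feats / (np.abs(feats).sum(axis=(0, 1, 2), keepdims=True) + eps)
    ch = st / (np.abs(st).sum(axis=3, keepdims=True) + eps)
    return ch
```

After the second step, the channel vector at every spatio-temporal position sums to 1 in absolute value, so the spatial and temporal streams are on a comparable scale before they are pooled along the trajectories and concatenated for the LSTM classifier.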

Description

Technical Field

[0001] The invention relates to the technical field of computer vision, and in particular to a video behavior recognition method based on deep convolutional features.

Background

[0002] Video, as a carrier of more information than still images, has gradually become one of the most important forms of visual data in daily life. As a basic technology for video analysis and understanding, video behavior recognition is attracting increasing attention from scholars and engineers. On the one hand, behavior recognition is widely applied in daily life and production, for example in intelligent systems and autonomous driving. On the other hand, behavior recognition can advance video analysis and understanding, and in turn promote technologies such as network video transmission, storage, and personalized video recommendation.

[0003] Compared with image classification tasks, video-based classification needs to co...

Claims


Application Information

Patent Type & Authority Patents(China)
IPC(8): G06K9/00; G06K9/62; G06N3/04
CPC: G06V20/46; G06N3/045; G06F18/241
Inventor: 许勇, 张银珠
Owner SOUTH CHINA UNIV OF TECH