Video action recognition method based on a sparse temporal segment network

An action recognition and temporal segmentation technology, applied in the field of image processing, that addresses the problems of large storage requirements, low recognition accuracy, and slow recognition speed, and achieves the effects of a streamlined model, improved action recognition accuracy, and easy deployment and implementation.

Publication date: 2018-11-06 (inactive)
Applicant: HUAZHONG UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

[0005] Addressing the defects of the prior art, the purpose of the present invention is to solve the technical problems of large storage requirements, low recognition accuracy, and slow recognition speed in the prior art.

Embodiment Construction

[0048] In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.

[0049] Figure 1 is a schematic flowchart of a video action recognition method based on a sparse temporal segment network provided by an embodiment of the present invention. As shown in Figure 1, the method includes the following steps:

[0050] S1. Construct a temporal convolutional neural network and a spatial convolutional neural network (see the sketch after these steps);

[0051] S2. Prepare a training video set, extract information from each training video, and perform the first training and first optimization of the temporal convolutional neural network and the spatial convolutional neural network to minimize the loss function...
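As a rough illustration of step S1, the sketch below builds the two streams in PyTorch. The excerpt does not name a backbone network, flow-stack depth, or class count, so the ResNet-18 backbone, the 10-frame flow stack, and the 101 output classes are illustrative assumptions rather than the patented configuration.

```python
# Hedged sketch of step S1: the backbone (ResNet-18), flow stack depth (10),
# and class count (101) are assumptions, not taken from the patent.
import torch.nn as nn
from torchvision import models

def build_spatial_net(num_classes: int) -> nn.Module:
    """Spatial stream: consumes a single RGB frame (3 input channels)."""
    net = models.resnet18(weights=None)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

def build_temporal_net(num_classes: int, flow_stack: int = 10) -> nn.Module:
    """Temporal stream: consumes a stack of optical-flow fields,
    two channels (horizontal, vertical) per flow frame."""
    net = models.resnet18(weights=None)
    net.conv1 = nn.Conv2d(2 * flow_stack, 64, kernel_size=7,
                          stride=2, padding=3, bias=False)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

spatial_net = build_spatial_net(num_classes=101)
temporal_net = build_temporal_net(num_classes=101)
```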

Abstract

The invention discloses a video action recognition method based on a sparse temporal segment network. The method includes the following steps: extracting information from each training video and performing a first training and optimization of the temporal segment network; adding a sparsity constraint to the network after the first optimization and performing a second training and optimization; pruning and dimension-adjusting the network after the second optimization; performing a third training and optimization of the dimension-adjusted network until the recognition accuracy or the sparsity reaches the expected level; and extracting information from a video to be recognized, inputting the extracted information into the network after the third optimization, and fusing the outputs of the temporal segment network to obtain an action recognition result. With this method, information from longer videos can be captured through the temporal segment network, and the two-stream convolutional network structure makes full use of the video information, greatly improving action recognition accuracy; the structured sparsity method makes the convolution-layer weights sparse in groups, network pruning further simplifies the model, and the storage requirement is reduced.
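The structured-sparsity step summarized above can be pictured as a group-lasso penalty added to the task loss during the second training stage, so that whole groups of convolution weights shrink toward zero together and the corresponding filters can then be cut away. Below is a minimal sketch; grouping by output filter, the threshold tol, and the regularization weight lam are assumptions, since the abstract only states that the convolution-layer weights become sparse in groups.

```python
# Hedged sketch of group sparsity: one group per output filter is an
# assumed granularity; the patent only says weights become sparse in groups.
import torch.nn as nn

def group_sparsity_penalty(model: nn.Module):
    """Sum of per-filter L2 norms: a group-lasso term that drives
    whole filters (groups of conv weights) to zero together."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            # weight shape: (out_channels, in_channels, kH, kW)
            penalty = penalty + m.weight.flatten(1).norm(p=2, dim=1).sum()
    return penalty

def prunable_filters(conv: nn.Conv2d, tol: float = 1e-3):
    """Indices of filters whose norm collapsed to (near) zero; cutting
    them shrinks the layer's dimensions, as in the pruning step."""
    norms = conv.weight.detach().flatten(1).norm(p=2, dim=1)
    return (norms < tol).nonzero(as_tuple=True)[0].tolist()

# Second training stage (illustrative): task loss plus sparsity penalty,
# with lam a hypothetical regularization weight.
# loss = criterion(logits, labels) + lam * group_sparsity_penalty(net)
```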

Description

Technical Field

[0001] The invention belongs to the field of image processing, and more specifically relates to a video action recognition method based on a sparse temporal segment network.

Background Technique

[0002] Video action recognition processes the data of an input video and applies an algorithm to analyze the human behavior in the video and recognize the human actions. In 2014, Simonyan et al. proposed a video action recognition method based on a two-stream convolutional network model. The model consists of two neural networks: the first is a spatial neural network whose input is a traditional single RGB image; the second is a temporal neural network whose input is the optical flow map corresponding to the RGB image fed to the first network. The optical flow map is calculated from two RGB images at adjacent moments; by computing the pixel-level changes between the two images, an optical flow map containing the change information can be obtained, s...
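For concreteness, the optical-flow input described above can be computed with any dense flow estimator; this excerpt does not say which one the invention uses. The sketch below uses OpenCV's Farneback algorithm purely as a stand-in.

```python
# Hedged sketch: Farneback dense flow as a stand-in for whatever flow
# algorithm the method actually uses (not specified in this excerpt).
import cv2
import numpy as np

def dense_flow(prev_bgr: np.ndarray, next_bgr: np.ndarray) -> np.ndarray:
    """Return an (H, W, 2) flow field between two adjacent frames:
    channel 0 is horizontal displacement, channel 1 is vertical."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
```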

Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/62
CPC: G06V20/41; G06F18/2136
Inventors: 温世平, 曾小芬, 黄廷文
Owner: HUAZHONG UNIV OF SCI & TECH