
A streaming deep network model for video action recognition

A deep-network action recognition technology in the field of computer vision, which addresses the problem that deep learning methods have not yet made breakthrough progress in action recognition, and achieves the effects of high accuracy and high efficiency.

Active Publication Date: 2019-08-09
JIANGXI UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

However, unlike other fields of computer vision (image classification, face recognition, pose estimation), current research on deep learning methods for action recognition has not made breakthrough progress, and their recognition performance is only slightly better than that of traditional methods.




Embodiment Construction

[0019] In order to make the technical means, creative features, objectives and effects of the present invention easy to understand, the invention is further described below in conjunction with specific embodiments.

[0020] As shown in Figures 1-6, a streaming deep network model for video action recognition comprises the following steps: divide the video into frames and compute the inter-frame optical flow, generating optical flow images in two directions, horizontal and vertical; apply data augmentation such as flipping and cropping to the video frames, then input them into the spatial-stream network for training to obtain the spatial-stream network model; stack 10 optical flow images in each of the horizontal and vertical directions (20 optical flow images in total) into a group, apply flipping and cropping, and then input the group into the temporal-stream network for training to obtain the temporal-stream network model; ...
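The stacking step described above can be sketched in Python. This is a minimal illustration under assumptions, not the patent's own code: the function name is invented, and the flow fields are synthetic NumPy arrays standing in for real optical flow images.

```python
import numpy as np

def stack_flow_images(flows_x, flows_y):
    """Stack 10 horizontal and 10 vertical optical-flow images into one
    20-channel input for the temporal-stream network.

    flows_x, flows_y: lists of 10 arrays, each of shape (H, W), holding the
    horizontal and vertical flow components of 10 consecutive frame pairs.
    Returns an array of shape (20, H, W) with channels interleaved as
    (x_1, y_1, x_2, y_2, ...).
    """
    assert len(flows_x) == 10 and len(flows_y) == 10
    channels = []
    for fx, fy in zip(flows_x, flows_y):
        channels.append(fx)
        channels.append(fy)
    return np.stack(channels, axis=0)

# Synthetic example: 10 flow fields for a 224x224 clip.
H, W = 224, 224
fx = [np.random.randn(H, W).astype(np.float32) for _ in range(10)]
fy = [np.random.randn(H, W).astype(np.float32) for _ in range(10)]
x = stack_flow_images(fx, fy)
print(x.shape)  # (20, 224, 224)
```

In practice the flow fields themselves would come from a dense optical flow algorithm (e.g. Farneback or TV-L1) applied to consecutive frames; the channel ordering within the 20-channel stack is an assumption, as the patent text does not specify it.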



Abstract

The invention discloses a streaming deep network model for video action recognition. Research on action recognition in video has moved from early traditional methods to deep learning methods in recent years, and the two-stream approach, which combines spatial and temporal information, is currently the most mainstream deep learning method in the action recognition field. The model of the present invention improves on the two-stream approach: on the spatial stream, an iterative interactive training scheme yields a more effective spatial-stream model; on the temporal stream, an improved temporal feature extraction network based on the residual network is proposed; finally, the trained spatial and temporal streams are combined, exploiting their respective classification strengths, into an overall serial-stream classification model via a multi-layer classification scheme. Tested on the UCF101 dataset, the single spatial stream improves on the original method by 1.21%, the temporal stream improves by 1.42%, and the final model improves on the single spatial and temporal streams by about 6%.
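The abstract does not detail the multi-layer serial classification scheme, but the basic idea of combining the two streams' class scores can be sketched as simple weighted late fusion, a common baseline for two-stream models. Everything here (function name, weights, scores) is an illustrative assumption, not the patent's method:

```python
import numpy as np

def fuse_two_stream(spatial_scores, temporal_scores, w_spatial=0.5):
    """Weighted late fusion of per-class scores from the two streams.

    spatial_scores, temporal_scores: 1-D arrays of per-class confidences
    (e.g. softmax outputs) of equal length. Returns the index of the
    predicted class under the fused scores.
    """
    fused = w_spatial * spatial_scores + (1.0 - w_spatial) * temporal_scores
    return int(np.argmax(fused))

# Toy example with 3 classes: the streams disagree, fusion decides.
spatial = np.array([0.1, 0.7, 0.2])
temporal = np.array([0.2, 0.3, 0.5])
print(fuse_two_stream(spatial, temporal))  # 1
```

The patent's serial multi-layer scheme presumably goes beyond a single weighted average (e.g. staging the classifiers so each stream handles the classes it discriminates best), but the fused-score principle is the same.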

Description

technical field

[0001] The invention belongs to the field of computer vision, and in particular relates to a streaming deep network model for video action recognition.

Background technique

[0002] Action recognition in video is a very active and challenging research hotspot in the current field of computer vision. Unlike still-image classification, action recognition in video must consider not only spatial information but also temporal information.

[0003] Although some practical applications of action recognition can be seen in real life, they remain at relatively shallow application levels. At present, even the best action recognition methods fall far short of expectations when faced with real, complex scenes. Early action recognition methods were based on manual feature extraction. With the rise of deep learning and convolutional neural networks, as in other fields of computer vision, research on action recognition has gradually shifted from tra...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06K9/00
CPC: G06V40/20; G06F18/24; G06F18/214
Inventor: 罗会兰, 文彪
Owner: JIANGXI UNIV OF SCI & TECH