
Video image classification method based on time-space co-occurrence double-flow network

A technology relating to video images and classification methods, applied to biological neural network models, neural learning methods, instruments, etc.

Status: Inactive | Publication Date: 2017-03-01
SHENZHEN WEITESHI TECH

AI Technical Summary

Problems solved by technology

So far, no neural-network-based methods have been used to classify objects in videos.



Examples


Detailed Description of the Embodiments

[0042] It should be noted that, provided there is no conflict, the embodiments of the present application and the features within the embodiments may be combined with each other. The present invention is described in further detail below with reference to the drawings and specific embodiments.

[0043] Figure 1 is a system flowchart of the video image classification method based on a time-space co-occurrence dual-stream network according to the present invention. The method mainly includes data input, a spatio-temporal dual-stream network, fusion, and an SVM classifier.
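The patent names these pipeline stages but does not disclose the layer configuration. The following minimal PyTorch sketch illustrates one way the two streams and the early fusion could be wired; the backbone depth, channel counts, 256-dimensional fused feature, and ten-frame flow stack are illustrative assumptions, not the patented design.

```python
# Hedged sketch of a two-stream network with early fusion by feature
# concatenation; all sizes below are assumptions for illustration.
import torch
import torch.nn as nn

class StreamBackbone(nn.Module):
    """Small convolutional backbone; in_channels differs per stream
    (3 for RGB frames, 2*L for a stack of L optical-flow fields)."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # -> (N, 128, 1, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x).flatten(1)  # (N, 128)

class TwoStreamEarlyFusion(nn.Module):
    """Spatial + temporal streams fused early by concatenation; the
    fused vector is what the patent feeds to an external SVM."""
    def __init__(self, flow_stack: int = 10):
        super().__init__()
        self.spatial = StreamBackbone(in_channels=3)
        self.temporal = StreamBackbone(in_channels=2 * flow_stack)
        self.fuse = nn.Linear(128 + 128, 256)

    def forward(self, rgb: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        f = torch.cat([self.spatial(rgb), self.temporal(flow)], dim=1)
        return self.fuse(f)  # 256-d feature vector per clip

# Shape check with dummy data.
model = TwoStreamEarlyFusion()
rgb = torch.randn(4, 3, 224, 224)    # 4 RGB frames
flow = torch.randn(4, 20, 224, 224)  # 4 stacks of 10 flow fields (x, y)
print(model(rgb, flow).shape)        # torch.Size([4, 256])
```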

[0044] The data input includes image and optical flow information. The dataset consists of videos of 100 monkey species and is divided into a training set and a test set. Because the monkeys were recorded at a distance, the dataset is highly challenging, exhibiting large-scale camera motion and considerable pose variation. For each class (monkey species), the following data is provided: video clips with activity annotation...
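The embodiment states that the input includes optical flow but does not name the flow algorithm. The sketch below uses OpenCV's Farneback dense flow purely as a stand-in to show how a stack of horizontal and vertical flow fields for the temporal stream could be produced; the `flow_stack` helper, `num_pairs` value, and Farneback parameters are all assumptions.

```python
# Stand-in sketch: Farneback dense optical flow for the temporal stream.
import cv2
import numpy as np

def flow_stack(video_path: str, num_pairs: int = 10) -> np.ndarray:
    """Return a (2*num_pairs, H, W) array of x/y flow fields computed
    from consecutive grayscale frame pairs of one video clip."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError(f"cannot read {video_path}")
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    fields = []
    while len(fields) < 2 * num_pairs:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense flow between the previous and current frame.
        flow = cv2.calcOpticalFlowFarneback(
            prev, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        fields.append(flow[..., 0])  # horizontal component
        fields.append(flow[..., 1])  # vertical component
        prev = gray
    cap.release()
    return np.stack(fields)
```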



Abstract

The invention provides a video image classification method based on a time-space co-occurrence double-flow (dual-stream) network. The method comprises data input, a spatio-temporal dual-stream network, fusion, and an SVM classifier. The process first inputs image and optical flow information, performs early fusion of the temporal network and the spatial network, uses the fused output as a feature vector, and inputs that feature vector into the SVM classifier to obtain the final classification result. The invention adopts an early-fusion dual-stream network that combines temporal and spatial information (time-space co-occurrence) and uses a monkey video dataset; drawing more frames, and therefore more spatial data, from each video yields a significant improvement in precision. The spatial and temporal information complement each other, and the precision reaches 65.8%. The co-occurrence method forms a smaller number of separated clusters that generally stay closer together, so the temporal information is better exploited.
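The abstract specifies that the fused output acts as a feature vector fed to an SVM classifier. Below is a minimal sketch of that final stage, assuming scikit-learn's SVC; the linear kernel, C value, and the random stand-in features (in place of real fused network outputs) are assumptions.

```python
# Minimal sketch of the final stage: fused feature vectors -> SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 256))  # stand-in for fused clip features
y = rng.integers(0, 100, size=1000)   # 100 monkey-species labels

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```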

Description

Technical Field

[0001] The invention relates to the field of video image classification, and in particular to a video image classification method based on a space-time co-occurrence dual-stream network.

Background

[0002] Video image classification is a very challenging problem: pose and appearance variations cause large intra-class variation, while subtle differences in overall appearance between classes cause small inter-class variation. Recently, deep convolutional neural networks (DCNNs) have been used to learn powerful features, handle large variations with hierarchical models, and localize regions automatically. Despite these advances, previous work treats the object classification task as a still-image classification problem, ignoring the complementary temporal information present in videos. So far, no neural-network-based methods have been used to classify objects in videos.

[0003] The present invention introduces the problem...


Application Information

IPC(8): G06K 9/62; G06N 3/08
CPC: G06N 3/08; G06F 18/2411; G06F 18/25
Inventor: 夏春秋 (Xia Chunqiu)
Owner: SHENZHEN WEITESHI TECH