Monitoring video multi-target tracking method based on deep learning

A surveillance-video multi-target tracking method based on deep learning, applied in the field of target tracking. It addresses problems such as slow tracking speed, and offers strong representational ability, strong predictive ability, and improved tracking accuracy.

Active Publication Date: 2017-11-07
HUAZHONG UNIV OF SCI & TECH
Cites: 3 · Cited by: 67

AI Technical Summary

Problems solved by technology

A common way to address occlusion is to divide the target into multiple spatial regions and assign a tracker to each region; when part of the target is occluded, …




Embodiment Construction

[0048] In order to make the object, technical solution, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it. In addition, the technical features of the various embodiments described below can be combined with one another provided they do not conflict.

[0049] The process of the method of the present invention is shown in Figure 1:

[0050] (1) Video frame decoding:

[0051] (11) Set the decoding frame interval so that 4 frames are extracted per second; for example, if the video is 24 fps, the interval is 6 frames;

[0052] (12) Use OpenCV to decode frames from the video in real time at the chosen frame interval;

[0053] ...
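The decoding steps above can be sketched as follows. Only the sampling rule (4 frames per second, e.g. an interval of 6 at 24 fps) and the use of OpenCV come from the text; the function names `decode_interval` and `extract_frames` and the `target_fps` parameter are hypothetical:

```python
def decode_interval(video_fps, target_fps=4):
    """Frame interval that keeps roughly target_fps frames per second.

    Matches the rule in step (11): at 24 fps the interval is 6.
    """
    return max(1, round(video_fps / target_fps))


def extract_frames(video_path, target_fps=4):
    """Decode a video with OpenCV, keeping one frame per interval.

    Illustrative sketch; the patent only specifies OpenCV decoding
    at the computed interval.
    """
    import cv2  # imported lazily so the interval logic runs without OpenCV

    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 24.0  # fall back if fps is unreadable
    interval = decode_interval(fps, target_fps)
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % interval == 0:  # keep every interval-th frame
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```

The extracted frames are then preprocessed and passed to the detection network described below.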



Abstract

The invention discloses a surveillance-video multi-target tracking method based on deep learning. The method first decodes a video and extracts an image sequence from it; the images are preprocessed and fed into a trained Faster R-CNN network, which extracts target location information and target spatial features from the corresponding layers of the network. The location information and spatial features are then input into an LSTM network to predict each target's location at the next moment, and the targets' spatial features are fused to obtain fused features for the next moment. Location similarity and spatial-feature similarity are combined with different weights into a final similarity, which is used to decide the correspondence between the targets detected at the current moment and the targets being tracked at the previous moment. Through this tracking scheme, the method reduces the missed-detection rate of multi-target tracking, improves tracking accuracy, and handles short-term target occlusion during tracking.
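As a concrete reading of the weighted-similarity step, the sketch below combines a location similarity and an appearance similarity into one score. The choice of IoU and cosine similarity, and the weights `w_loc`/`w_feat`, are illustrative assumptions; the abstract only states that the two similarities are combined with different weights:

```python
import math


def iou(a, b):
    """Location similarity: intersection-over-union of boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0


def cosine(u, v):
    """Appearance similarity: cosine between two feature vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv + 1e-12)


def fused_similarity(pred_box, det_box, track_feat, det_feat,
                     w_loc=0.5, w_feat=0.5):
    """Final similarity: weighted sum of location and feature similarity."""
    return w_loc * iou(pred_box, det_box) + w_feat * cosine(track_feat, det_feat)
```

Each current detection would then be matched to the tracked target with the highest fused similarity, e.g. greedily or with the Hungarian algorithm.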

Description

Technical field

[0001] The invention belongs to the technical field of target tracking, and more specifically relates to a deep-learning-based multi-target tracking method for surveillance video.

Background technique

[0002] The construction of safe cities and the popularization of high-definition cameras have produced massive amounts of surveillance video. Collecting clues from such massive video data by manpower alone is time-consuming and difficult, and human factors such as visual fatigue make it hard to guarantee that no target is missed. With the development of visual computing, from machine learning to deep learning, computers can understand the information in videos more intelligently, and video intelligent-analysis systems have emerged. A video intelligent-analysis system extracts the moving targets of each frame by analyzing the continuous image sequence and determines the relationship between targets in adjacent frames through tracking technology…


Application Information

IPC(8): G06T7/246, G06N3/02, G06N3/08
CPC: G06T7/246, G06T2207/10016, G06T2207/20081, G06T2207/20084
Inventors: 凌贺飞, 李叶, 李平
Owner: HUAZHONG UNIV OF SCI & TECH