
Multi-camera cooperative target tracking method based on deep learning

A target tracking and multi-camera technology, applied in the field of deep-learning-based multi-camera collaborative target tracking, which solves the problems of discrete image information and difficult information integration and achieves the effect of improving the intelligence of the video surveillance system.

Inactive Publication Date: 2020-10-09
EAST CHINA NORMAL UNIV

Problems solved by technology

[0003] In prior-art multi-camera surveillance systems, the cameras are independent of one another and the collected image information is discrete, so information integration is difficult and the identification, positioning, and tracking of a specific target cannot be realized.



Examples


Embodiment 1

[0033] Step 1: Construct the spatial data model of the video surveillance system

[0034] The spatial data model of the video surveillance environment is established using vector data and is divided into three parts: the three-dimensional spatial representation of the surveillance environment, the camera positions and views, and the target spatial positions and trajectory information:

[0035] 1) Three-dimensional space representation of the monitoring environment

[0036] A custom O-XYZ three-dimensional coordinate system is used to represent the three-dimensional monitoring environment. The coordinate origin O can be set at a chosen feature point in the monitoring environment, the XOY plane represents the two-dimensional monitoring ground, and the Z axis represents height. Three-dimensional objects in the monitoring environment can be mapped onto the two-dimensional monitoring ground and abstractly expressed as point, line, and surface...
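As an illustration of this step, here is a minimal Python sketch of such a vector spatial data model; all class and field names are hypothetical, chosen only to mirror the three parts named above (environment representation, camera position and view, target position and trajectory):

```python
from dataclasses import dataclass, field

@dataclass
class Camera:
    """Camera position and view in the O-XYZ surveillance frame."""
    position: tuple      # (x, y, z) mounting point
    view_polygon: list   # ground-plane footprint of the field of view, [(x, y), ...]

@dataclass
class Target:
    """Spatial position and trajectory of a tracked target."""
    position: tuple                                  # current (x, y) on the XOY ground
    trajectory: list = field(default_factory=list)   # history of past (x, y) points

    def move_to(self, xy):
        """Record the current position in the trajectory, then update it."""
        self.trajectory.append(self.position)
        self.position = xy

def project_to_ground(point_3d):
    """Abstract a 3-D object point to the 2-D monitoring ground (drop the Z height)."""
    x, y, _z = point_3d
    return (x, y)
```

The projection helper reflects the mapping described in [0036]: a 3-D object is reduced to its footprint on the XOY monitoring ground.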


Abstract

The invention discloses a multi-camera cooperative target tracking method based on deep learning. A Faster R-CNN is adopted for target detection; the real position of the target is obtained through the mapping relation between image coordinates and plane geographic space coordinates; and target detection and tracking are realized in a multi-camera video surveillance scene. The method comprises the steps of constructing a spatial data model of the video surveillance system, constructing a mapping model, performing target detection based on deep learning, and performing multi-camera cooperative target tracking. Compared with the prior art, the method screens out one camera to undertake the target tracking task; a specific target is identified, positioned, and tracked in a multi-camera video surveillance scene, the intelligence of the video surveillance system is improved, and the problems that image information is discrete, information is difficult to integrate, and a specific target cannot be identified, positioned, and tracked are well solved.
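The "mapping relation between image coordinates and plane geographic space coordinates" in the abstract is commonly realized with a planar homography. A minimal pure-Python sketch, assuming the 3×3 matrix `H` has already been calibrated (e.g. from at least four image-to-ground point correspondences); the function name is illustrative, not from the patent:

```python
def image_to_ground(H, u, v):
    """Map a pixel (u, v) to plane coordinates (x, y) via a 3x3 homography H
    given as row-major nested lists: [x', y', w']^T = H * [u, v, 1]^T."""
    xp = H[0][0] * u + H[0][1] * v + H[0][2]
    yp = H[1][0] * u + H[1][1] * v + H[1][2]
    wp = H[2][0] * u + H[2][1] * v + H[2][2]
    # Dehomogenize: divide by the projective scale w'
    return (xp / wp, yp / wp)
```

For example, with the identity matrix as `H` a pixel maps to itself, while a diagonal scaling matrix stretches image coordinates onto the ground plane.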

Description

Technical field

[0001] The invention relates to the technical field of video surveillance systems, and in particular to a multi-camera cooperative target tracking method based on deep learning.

Background technique

[0002] At present, the analysis of surveillance video is based mainly on the video itself. Most target detection adopts image processing methods based on computer vision, which are sensitive to complex scenes and whose detection effect degrades accordingly. In addition, computer-vision-based target detection cannot directly pick out a specific target from among many moving targets, so such methods lack targeted detection. Target detection based on deep learning is not limited by the motion state of the target and can also detect specific targets. With increasingly powerful GPU computing, real-time object detection based on deep learning has been realized, and the research idea of combining...


Application Information

IPC(8): G06T7/292; G06T7/246; G06T7/73; G06N3/04; G06N3/08
CPC: G06T7/292; G06T7/246; G06T7/73; G06T2207/10016; G06T2207/30232; G06N3/08; G06N3/045
Inventor: 陈渠, 李响
Owner: EAST CHINA NORMAL UNIV