
Cross-camera road space fusion and vehicle target detection tracking method and system

A space fusion and target detection technology in the field of intelligent transportation, addressing problems such as the limited monitoring range of a single camera sensor.

Pending Publication Date: 2020-12-04
CHANGAN UNIV

AI Technical Summary

Problems solved by technology

[0004] In view of the defects and deficiencies in the prior art, the present invention provides a cross-camera road space fusion and vehicle target detection and tracking method and system, which overcomes the limited monitoring range of existing camera sensors.


Examples


Embodiment 1

[0047] As shown in Figures 1 to 7, the present invention discloses a method and system for cross-camera road space fusion and vehicle target detection and tracking. The detailed steps are as follows:

[0048] Step 1: input the traffic scene background images p1 and p2 of scene 1 and scene 2, and the video frame image sequence groups s1 and s2. Here, a background image is an image that contains no vehicle targets, and a video frame image is an image extracted from the original video collected by the camera.
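The patent does not say how the background images are produced, only that they contain no vehicles; a per-pixel temporal median over the frame sequence is one standard way to approximate such a background. A minimal sketch in Python (file paths and the frame stride are hypothetical, not from the patent):

```python
import cv2
import numpy as np

def extract_frames(video_path, stride=5):
    """Extract a frame sequence s_i from the original video (every `stride`-th frame)."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames

def median_background(frames):
    """Approximate a vehicle-free background image p_i as the per-pixel temporal median."""
    stack = np.stack(frames, axis=0)
    return np.median(stack, axis=0).astype(np.uint8)

s1 = extract_frames("scene1.mp4")   # hypothetical input files
s2 = extract_frames("scene2.mp4")
p1 = median_background(s1)
p2 = median_background(s2)
```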

[0049] Step 2: construct the coordinate systems and models, and complete the camera calibration. From the background images p1 and p2 of Step 1, extract the vanishing point; establish the camera model, the coordinate systems (world coordinate system and image coordinate system), and the two-dimensional envelope model of the vehicle target in the image coordinate system; and combine the vanishing point for camera calibration to obtain...
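The excerpt truncates before the calibration details. As a reading aid, the sketch below shows one standard single-vanishing-point calibration for a roadside camera, assuming zero roll, a principal point at the image centre, a known camera height, and a given focal length f (in practice f may come from a second vanishing point or prior knowledge); it returns the homography between road-plane world coordinates and image pixels. All names are illustrative, not the patent's.

```python
import numpy as np

def calibrate_from_vanishing_point(vp, f, cam_height, image_size):
    """Recover tilt/pan from the road-direction vanishing point and build the
    homography H mapping road-plane world coords (X, Y, 1) to image pixels.
    Assumes: principal point at image centre, zero roll, square pixels."""
    w, h_img = image_size
    cx, cy = w / 2.0, h_img / 2.0
    u_v, v_v = vp
    tilt = np.arctan2(cy - v_v, f)                      # camera tilt below horizontal
    pan = np.arctan2((cx - u_v) * np.cos(tilt), f)      # pan relative to road direction
    ct, st = np.cos(tilt), np.sin(tilt)
    cp, sp = np.cos(pan), np.sin(pan)
    # World frame: Z up, road roughly along +Y, camera centre at (0, 0, cam_height).
    R_tilt = np.array([[1, 0, 0],
                       [0, -st, -ct],
                       [0, ct, -st]])
    R_pan = np.array([[cp, -sp, 0],
                      [sp, cp, 0],
                      [0, 0, 1]])
    R = R_tilt @ R_pan
    t = -R @ np.array([0.0, 0.0, cam_height])
    K = np.array([[f, 0, cx],
                  [0, f, cy],
                  [0, 0, 1.0]])
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))      # ground plane Z = 0
    return H  # np.linalg.inv(H) maps image pixels back to road-plane coordinates
```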

Embodiment 2

[0084] This embodiment provides a cross-camera road space fusion and vehicle target detection and tracking system. The system includes the following modules (a hypothetical skeleton is sketched after the list):

[0085] The data input module is used to input the multiple traffic scene background images to be spliced and the corresponding video frame image sequence groups containing vehicles;

[0086] The camera calibration module is used to establish the camera model and coordinate systems and the two-dimensional envelope model of the vehicle target in the image coordinate system, perform camera calibration, and obtain the camera calibration parameters and the final two-dimensional to three-dimensional transformation matrix of the scene;

[0087] The control-point road-area identification module is used to set two control points in each of p1 and p2 to identify the range of the road area; the control points are located on the centerline of the road. Set the world coordinates and image coordinates of the control points in sc...
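The module descriptions above cut off mid-sentence; purely as an illustration of how such modules might compose, here is a hypothetical skeleton (all class and method names are invented, not the patent's):

```python
class CrossCameraFusionSystem:
    """Illustrative composition of the modules described in Embodiment 2."""

    def __init__(self, data_input, calibrator, road_area_identifier, fuser, detector_tracker):
        self.data_input = data_input                       # backgrounds + frame sequences
        self.calibrator = calibrator                       # camera model, 2D envelope model, 2D-3D transform
        self.road_area_identifier = road_area_identifier   # control points on the road centerline
        self.fuser = fuser                                 # blank fusion image + pixel mapping
        self.detector_tracker = detector_tracker           # deep-network detection + tracking

    def run(self):
        backgrounds, sequences = self.data_input.load()
        calib = [self.calibrator.calibrate(bg) for bg in backgrounds]
        roads = [self.road_area_identifier.mark(bg) for bg in backgrounds]
        fused = self.fuser.fuse(backgrounds, calib, roads)
        return self.detector_tracker.track(sequences, calib, fused)
```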

Embodiment 3

[0092] To verify the validity of the proposed method, an embodiment of the present invention uses a set of real road traffic scene images, shown in Figure 4, in which a single vanishing point along the road direction is identified and the camera is calibrated. Figure 5 shows the result of road space fusion using the method proposed in the present invention. On this basis, vehicle targets in the video frame sequence groups are detected by a deep network method and, combined with the road space fusion results, cross-camera vehicle target detection and tracking is completed; the results are shown in Figure 7.
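The detector named in the abstract is Yolov3 trained on a vehicle data set; a minimal per-frame detection sketch using OpenCV's DNN module is shown below. The config/weights file names and thresholds are assumptions, and the cross-camera association step is omitted.

```python
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3_vehicle.weights")  # hypothetical files
out_names = net.getUnconnectedOutLayersNames()

def detect_vehicles(frame, conf_thr=0.5, nms_thr=0.4):
    """Run YOLOv3 on one frame and return vehicle bounding boxes (x, y, w, h)."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    boxes, scores = [], []
    for out in net.forward(out_names):
        for det in out:
            score = float(det[5:].max())        # best class score for this candidate
            if score > conf_thr:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                scores.append(score)
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thr, nms_thr)  # non-maximum suppression
    return [boxes[i] for i in np.array(keep).flatten()]
```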

[0093] The experimental results show that the road space fusion completed by this method has high precision and that cross-camera vehicle target detection and tracking can be completed well. The experimental results are shown in Table 1. The experimental results show that this method c...



Abstract

The invention discloses a cross-camera road space fusion and vehicle target detection and tracking method and system. The method comprises the steps of: carrying out background image extraction and scene calibration on two traffic scenes to acquire calibration parameters; dividing scene splicing areas respectively, setting pixel distance ratio parameter groups in the length and width directions, and generating a blank space fusion image; taking pixels from the sub-scene areas and placing them into the blank space fusion image to acquire a fusion image with spatial information; and detecting vehicle targets in a continuous image sequence by using a deep neural network (Yolov3) trained on a vehicle data set to obtain two-dimensional envelope model parameters, and completing cross-camera vehicle target detection and tracking in combination with the space fusion information. The invention can adapt to continuous road traffic scenes containing a common area; cross-camera road space fusion is completed through camera calibration, and a large number of vehicle targets in the scene are extracted with the deep neural network to complete cross-camera vehicle target detection and tracking. The invention is easy to implement and has high universality.
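Reading the abstract's fusion step concretely: given each scene's calibration homography (like the one from the calibration sketch above), the pixel distance ratio parameters fix how many fused-image pixels correspond to one metre along the road's length and width directions, and each sub-scene's pixels are warped into an initially blank fusion image. A sketch under those assumptions (function and parameter names are illustrative):

```python
import cv2
import numpy as np

def fuse_scenes(images, H_world_to_image, px_per_m=(10.0, 10.0), size_m=(200.0, 40.0)):
    """Warp each scene onto a common road-plane mosaic.
    px_per_m: pixel distance ratios in the length/width directions.
    size_m:   extent of the fused road area in metres."""
    rx, ry = px_per_m
    out_w, out_h = int(size_m[0] * rx), int(size_m[1] * ry)
    # Scale world metres to fused-image pixels (a translation can be folded
    # into S to shift the road area into view).
    S = np.array([[rx, 0, 0], [0, ry, 0], [0, 0, 1.0]])
    fused = np.zeros((out_h, out_w, 3), dtype=np.uint8)
    for img, H in zip(images, H_world_to_image):
        # image pixels -> road-plane world coords -> fused-image pixels
        M = S @ np.linalg.inv(H)
        warped = cv2.warpPerspective(img, M, (out_w, out_h))
        mask = warped.any(axis=2)
        fused[mask] = warped[mask]   # later scenes overwrite the common area
    return fused
```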

Description

technical field

[0001] The invention belongs to the technical field of intelligent transportation, and in particular relates to a method and system for cross-camera road space fusion and vehicle target detection and tracking.

Background technique

[0002] Cross-camera road space fusion has been widely used in the fields of virtual reality and computer vision. Common fusion methods fall into two categories: acquisition with professional equipment, and image processing. The former can achieve a better fusion effect, but the equipment is expensive and its operation is complicated, which hinders adoption; the latter collects multiple images containing a common area through cameras and completes spatial fusion using image fusion and related techniques. This approach is low in cost, achieves a good fusion effect, has wider applicability, and is currently the main method of space fusion.

[0003] Image fusion technology usually needs to rely on image grayscale, freq...


Application Information

IPC(8): G06T5/50, G06T7/292, G06T7/80, G08G1/01, G08G1/017, G06K9/00
CPC: G06T7/292, G08G1/0116, G08G1/0108, G06T7/80, G08G1/017, G06T5/50, G06V20/52, G06V20/54, Y02T10/40
Inventor 王伟唐心瑶宋焕生穆勃辰李聪亮梁浩翔张文涛雷琪刘莅辰戴喆云旭侯景严贾金明赵锋余宵雨靳静玺王滢暄崔子晨赵春辉
Owner CHANGAN UNIV