
Sparse point cloud multi-target tracking method fusing spatio-temporal information

A sparse point cloud tracking technology, applied in the field of 3D vision, which addresses problems such as low precision and insufficient information utilization.

Active Publication Date: 2021-03-26
TSINGHUA UNIV
Cites: 3 · Cited by: 12

AI Technical Summary

Problems solved by technology

[0004] The purpose of the present invention is to propose a sparse point cloud multi-target tracking method that fuses spatio-temporal information, in order to overcome the low accuracy and insufficient information utilization of existing 3D multi-target tracking algorithms on sparse point cloud data.



Examples


Embodiment Construction

[0063] The present invention proposes a sparse point cloud multi-target tracking method that integrates spatio-temporal information, which will be further described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0064] The present invention proposes a sparse point cloud multi-target tracking method that fuses spatio-temporal information. The overall process is shown in Figure 1 and includes the following steps:

[0065] 1) Obtain a point cloud training set and use it to train the 3D point cloud trajectory segment prediction deep learning network, obtaining the trained network; the specific steps are as follows:

[0066] 1-1) Obtain the point cloud training set; the specific steps are as follows:

[0067] 1-1-1) Set up a lidar anywhere around the edge of the fixed scene (any model can be used; this example uses a Livox Mid-100 lidar), so t...
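Step 1 above trains a network that consumes several consecutive point cloud frames at once. As a hedged illustration only (the patent's actual network architecture and input encoding are not given in this excerpt), one common way to present multi-frame input to a point cloud backbone is to tag each point with a relative time index and concatenate the frames, so that sparse per-frame observations reinforce each other:

```python
import numpy as np

def fuse_frames(frames):
    """Concatenate consecutive point cloud frames into one (N, 4) array.

    Each frame is an (N_i, 3) array of XYZ points; a relative time index
    is appended as a fourth channel so a downstream feature extractor can
    distinguish points from different moments. This is an illustrative
    sketch of multi-frame input assembly, not the patent's exact scheme.
    """
    tagged = [np.hstack([f, np.full((len(f), 1), t, dtype=f.dtype)])
              for t, f in enumerate(frames)]
    return np.vstack(tagged)
```

With three consecutive sparse frames fused this way, the feature extraction network sees a denser spatio-temporal point set than any single frame provides.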



Abstract

The invention provides a sparse point cloud multi-target tracking method fusing spatio-temporal information, belonging to the field of 3D vision. The method uses a point cloud feature extraction network as the backbone, takes multiple frames of point cloud data as input simultaneously, and fuses temporal information across the extracted features, thereby avoiding missed detections caused by point cloud sparseness. Thanks to this spatio-temporal fusion, the tracking and detection tasks are more tightly coupled: detection boxes for the preceding and following frames are predicted at the same time, yielding a trajectory segment of the current target over three consecutive frames. A distance intersection-over-union (DIoU) score between each current trajectory segment and the trajectory tracking result at the previous moment is then computed, and a greedy algorithm matches and splices the newly produced trajectory segments with the historical ones to obtain the final trajectory tracking result at each moment. The method shows strong potential for multi-target tracking under sparse point clouds, is highly robust to missed and false detections, and still produces stable tracking results on sparse point cloud sequence input.
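The matching stage described above (DIoU scoring followed by greedy association) can be sketched as follows. This is a minimal illustration assuming axis-aligned bird's-eye-view boxes `(x1, y1, x2, y2)`; the patent's actual score is computed over 3D trajectory segments, and the threshold value here is a made-up placeholder:

```python
def diou_bev(a, b):
    """Distance-IoU between two axis-aligned BEV boxes (x1, y1, x2, y2).

    DIoU = IoU - d^2 / c^2, where d is the distance between box centers
    and c is the diagonal of the smallest enclosing box. Unlike plain
    IoU, DIoU stays informative even for non-overlapping boxes.
    """
    # Intersection area
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area_a + area_b - inter)
    # Squared center distance
    d2 = ((a[0] + a[2]) / 2 - (b[0] + b[2]) / 2) ** 2 + \
         ((a[1] + a[3]) / 2 - (b[1] + b[3]) / 2) ** 2
    # Squared diagonal of the smallest enclosing box
    ex1, ey1 = min(a[0], b[0]), min(a[1], b[1])
    ex2, ey2 = max(a[2], b[2]), max(a[3], b[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return iou - d2 / c2

def greedy_match(tracks, segments, threshold=-0.5):
    """Greedily pair historical tracks with new trajectory segments.

    All (track, segment) pairs are scored by DIoU, then consumed from
    the highest score down; each track and segment is used at most once.
    The threshold is an illustrative cutoff, not a value from the patent.
    """
    scores = [(diou_bev(t, s), i, j)
              for i, t in enumerate(tracks)
              for j, s in enumerate(segments)]
    scores.sort(reverse=True)          # best-scoring pair first
    used_t, used_s, matches = set(), set(), []
    for score, i, j in scores:
        if score < threshold:
            break                      # remaining pairs are too poor
        if i in used_t or j in used_s:
            continue
        matches.append((i, j))
        used_t.add(i)
        used_s.add(j)
    return matches
```

Matched segments would then be spliced onto their historical trajectories, while unmatched segments start new tracks.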

Description

Technical Field

[0001] The invention relates to the field of 3D vision, in particular to a sparse point cloud multi-target tracking method fusing spatio-temporal information.

Background Technique

[0002] In recent years, the rapid development of autonomous driving, robotics, and other fields has created an urgent need for high-precision target detection and tracking algorithms, making 3D target detection and tracking one of the most noteworthy research directions in computer vision. The detection and tracking task takes raw sensor data as input and outputs accurate target positions and tracking IDs; it is the basis of subsequent stages such as path planning and an essential part of the entire system. For precise three-dimensional positioning and tracking, depth cameras and multi-camera sensors suffer from low precision, short positioning range, and strong sensitivity to lighting. LiDAR, by contrast, has the characteristics of long range, high p...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/246; G06K9/62
CPC: G06T7/246; G06T2207/10016; G06T2207/10028; G06T2207/20081; G06T2207/20084; G06T2207/30221; G06T2207/30196; G06T2207/30241; G06F18/253
Inventors: 冯建江 (Feng Jianjiang), 周杰 (Zhou Jie), 张猛 (Zhang Meng)
Owner TSINGHUA UNIV