Semi-supervised target labeling method and system for three-dimensional point cloud data

A technology relating to three-dimensional point clouds and point cloud data, applied in the computer field. It addresses the problems of large three-dimensional point cloud data volumes, increased user burden, and complex labeling operations, and achieves the effects of improved labeling efficiency, reduced labeling cost, and an intuitive labeling process.

Pending Publication Date: 2020-01-24
Owner: SHANGHAI JIAO TONG UNIV
Cites: 6 · Cited by: 14

AI-Extracted Technical Summary

Problems solved by technology

However, deep learning solutions require large volumes of annotated 3D point cloud data.
[0003] Labeling 3D point cloud data currently faces great challenges, which fall into two main aspects. The first is complex labeling operations: labeling 3D bounding boxes in 3D point clouds is much more complicated than labeling 2D bounding boxes in 2D images, because the three-dimensional coordinates, length, width, height, and orientation of the 3D bounding box must all be considered at the same time. The second is repeated labeling operations: the point cloud data collected by LiDAR is usually provided as sequence frames; adjacent frames differ, but exhibit a high degree of data association, so labeling each frame from scratch leads to a large number of repeated labeling operations.

Abstract

The invention provides a semi-supervised target labeling method for three-dimensional point cloud data, comprising the following steps: reading the original three-dimensional point cloud data of the current frame; preprocessing the original three-dimensional point cloud data to obtain obstacle point cloud data containing only obstacle targets; performing unsupervised target detection on the obstacle point cloud data to obtain obstacle target 3D bounding boxes; using a multi-target tracking algorithm to automatically predict the obstacle target 3D bounding boxes of the current frame from the labeling result of the previous frame, and fusing them with the obstacle target 3D bounding boxes from unsupervised target detection; and checking and adjusting the fused obstacle target 3D bounding boxes to obtain the final 3D bounding boxes of the current frame, i.e., the final labeling result of the current frame. The invention further provides a semi-supervised target labeling system for three-dimensional point cloud data. The method well solves the problem of three-dimensional point cloud data annotation in the prior art and minimizes annotation cost.

Application Domain

Image enhancement, Image analysis, +1

Technology Topic

Data science, Point cloud, +5

Examples

  • Experimental program (1)

Example Embodiment

[0049] The following is a detailed description of the embodiments of the present invention: this embodiment is implemented on the premise of the technical solution of the present invention, and provides detailed implementation methods and specific operation processes. It should be noted that those skilled in the art can make several modifications and improvements without departing from the concept of the present invention, and these all belong to the protection scope of the present invention.
[0050] An embodiment of the present invention provides a semi-supervised target labeling method for 3D point cloud data, comprising the following steps:
[0051] Step 1, read the original 3D point cloud data of the current frame;
[0052] Step 2: preprocess the original point cloud data read in step 1, specifically including region-of-interest (ROI) selection and robust ground segmentation, so that points beyond the region of interest and ground points are filtered out of the read original 3D point cloud data, yielding obstacle point cloud data containing only obstacle targets;
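The preprocessing in step 2 can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the axis-aligned ROI test matches the description, but the ground model here is a simple height-percentile cut standing in for the "robust ground segmentation" (which in practice might be a RANSAC plane fit). All function and parameter names are hypothetical.

```python
import numpy as np

def preprocess(points, roi_min, roi_max, ground_eps=0.2):
    """Keep points inside an axis-aligned ROI, then drop near-ground points.

    points: (N, 3) array of x, y, z coordinates.
    Assumption: ground is roughly horizontal, so a low z-percentile plus a
    small margin approximates the ground height; a robust plane fit could
    replace this.
    """
    lo, hi = np.asarray(roi_min), np.asarray(roi_max)
    in_roi = np.all((points >= lo) & (points <= hi), axis=1)
    pts = points[in_roi]
    # Estimate ground height as a low percentile of z, cut a thin slab above it.
    ground_z = np.percentile(pts[:, 2], 5)
    return pts[pts[:, 2] > ground_z + ground_eps]
```

The output is the "obstacle point cloud data containing only obstacle targets" that the later steps consume.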
[0053] Step 3: perform unsupervised target detection on the obstacle point cloud data obtained in step 2, including obstacle target clustering and obstacle frame construction. Obstacle target clustering divides the obstacle point cloud data into several subsets, each subset being the set of all points belonging to one obstacle target; obstacle frame construction uses a frame construction algorithm to fit the most suitable oriented 3D bounding box to each subset;
[0054] In this step, "most suitable" refers to the best fitting estimate of the obstacle target's orientation.
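Step 3's two stages can be illustrated with a naive Euclidean clustering pass and a PCA-based oriented box fit. Both are stand-ins, since the patent does not name specific algorithms: the O(N²) BFS clustering and the covariance-eigenvector heading estimate are assumptions for illustration only.

```python
import numpy as np

def euclidean_cluster(points, radius=0.5):
    """Group points whose neighbors lie within `radius` (naive O(N^2) BFS)."""
    labels = -np.ones(len(points), dtype=int)
    cur = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        stack, labels[i] = [i], cur
        while stack:
            j = stack.pop()
            d = np.linalg.norm(points - points[j], axis=1)
            for k in np.nonzero((d < radius) & (labels == -1))[0]:
                labels[k] = cur
                stack.append(k)
        cur += 1
    return labels

def oriented_box(cluster):
    """Fit a yaw-oriented 3D box: PCA in the xy plane estimates the heading,
    extents come from the bounds in the rotated frame."""
    xy = cluster[:, :2]
    centered = xy - xy.mean(axis=0)
    w, v = np.linalg.eigh(np.cov(centered.T))   # principal xy direction
    yaw = np.arctan2(v[1, -1], v[0, -1])
    rot = np.array([[np.cos(-yaw), -np.sin(-yaw)],
                    [np.sin(-yaw),  np.cos(-yaw)]])
    local = centered @ rot.T
    length, width = local.max(0) - local.min(0)
    height = cluster[:, 2].max() - cluster[:, 2].min()
    center = np.append(xy.mean(0), (cluster[:, 2].max() + cluster[:, 2].min()) / 2)
    return center, (length, width, height), yaw
```

Each cluster returned by `euclidean_cluster` corresponds to one obstacle target; `oriented_box` then produces its oriented 3D bounding box (center, size, yaw).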
[0055] Step 4: if sequence frames are being labeled and a labeling result for the previous frame exists, use a multi-target tracking algorithm to automatically predict the obstacle target 3D bounding boxes of the current frame from the labeling result of the previous frame, and fuse them with the obstacle target 3D bounding boxes obtained by the unsupervised target detection of step 3;
[0056] In this step, if the current frame is not part of a sequence, the obstacle target 3D bounding boxes obtained in step 3 are used directly as the final labeling result; in that case labeling proceeds frame by frame, with each frame labeled from scratch. If the current frame is part of a sequence, the labeling result of the previous frame can be used to assist the labeling of the current frame.
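The predict-and-fuse step might look like the sketch below. The patent does not specify the tracking model or fusion rule, so a constant-velocity prediction with nearest-center matching is assumed purely for illustration; boxes are reduced to centers for brevity.

```python
import numpy as np

def predict_and_fuse(prev_boxes, prev_velocities, detected, dt=0.1, match_dist=2.0):
    """Propagate last frame's boxes with a constant-velocity model, then fuse
    with the current frame's unsupervised detections.

    prev_boxes / detected: (N, 3) box centers (x, y, z); sizes and yaw omitted.
    Matched tracks adopt the detected box; unmatched tracks keep the
    prediction; unmatched detections become new targets.
    """
    predicted = prev_boxes + prev_velocities * dt
    fused, used = [], set()
    for p in predicted:
        if len(detected) == 0:
            fused.append(p)                # no detections: keep the prediction
            continue
        d = np.linalg.norm(detected - p, axis=1)
        j = int(np.argmin(d))
        if d[j] < match_dist and j not in used:
            fused.append(detected[j])      # matched: trust the detection
            used.add(j)
        else:
            fused.append(p)                # unmatched track: keep the prediction
    for j in range(len(detected)):         # unmatched detections start new targets
        if j not in used:
            fused.append(detected[j])
    return np.array(fused)
```

In a full system the matching would likely use 3D IoU rather than center distance, but the carry-over of the previous frame's labels is the point being illustrated.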
[0057] Step 5: the user checks and adjusts the fused obstacle target 3D bounding boxes obtained in step 4 to obtain the final labeling result of the current frame, which is then output and saved;
[0058] Step 6: the point cloud of the next frame becomes the current frame, and steps 1 to 5 are repeated, achieving batch labeling of 3D point cloud data while avoiding complex and repeated labeling operations.
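Steps 1 to 6 can be tied together as a single labeling loop. The skeleton below is only a sketch of the control flow; the four callables are placeholders for the stages described above, not functions defined by the patent.

```python
def label_sequence(frames, preprocess, detect, predict_and_fuse, user_adjust):
    """Semi-supervised labeling loop over sequence frames (steps 1-6).

    preprocess, detect, predict_and_fuse, user_adjust are hypothetical
    callables standing in for steps 2-5.
    """
    results, prev = [], None
    for frame in frames:                       # step 1: read current frame
        obstacles = preprocess(frame)          # step 2: ROI + ground removal
        detections = detect(obstacles)         # step 3: unsupervised detection
        # step 4: first frame uses detections directly; later frames fuse
        # tracked predictions from the previous frame's labels.
        boxes = detections if prev is None else predict_and_fuse(prev, detections)
        final = user_adjust(boxes)             # step 5: user check/adjust
        results.append(final)                  # save the labeling result
        prev = final                           # step 6: next frame reuses it
    return results
```

The key property is that every frame after the first is labeled starting from the previous frame's result rather than from scratch.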
[0059] An embodiment of the present invention also provides a semi-supervised target labeling system for 3D point cloud data. The system includes: a data reading module 1, a visualization module 2, a preprocessing module 3, an automatic obstacle frame construction module 4, an interaction module 5, an image auxiliary module 6, and an annotation result output module 7.
[0060] Data reading module 1: responsible for inputting the required data into the system in the specified format. The essential data is the 3D point cloud data. Depending on the actual functional requirements, the following may also be needed: target category information data (for customizing target categories); timestamp data and lidar-to-world coordinate system conversion data (for the multi-target tracking step of module 4); two-dimensional image data and lidar-to-camera coordinate system conversion data (used by module 6 to map labeling results onto the two-dimensional image); and target labeling result data (used by module 2 to display the labeling result of the current frame, i.e., the obstacle target 3D bounding boxes of the current frame).
[0061] Visualization module 2, comprising:
[0062] The first visualization unit is used to display the original three-dimensional point cloud data and target labeling result data;
[0063] The second visualization unit is used to display obstacle point cloud data;
[0064] The third visualization unit is used to display the automatic construction of the obstacle 3D bounding box;
[0065] The fourth visualization unit is used to display the 3D bounding box of the obstacle after manual interaction;
[0066] A fifth visualization unit for displaying the image and the 2D and 3D bounding boxes mapped onto the image;
[0067] Preprocessing module 3: the original point cloud data displayed by the first visualization unit undergoes, in sequence, the preprocessing operations of ROI selection and ground point cloud removal, yielding an obstacle point cloud containing only obstacle targets;
[0068] Automatic obstacle frame construction module 4: the obstacle point cloud data displayed by the second visualization unit is processed, in sequence, by obstacle target clustering and obstacle frame construction to obtain 3D bounding boxes for all possible obstacle targets in the obstacle point cloud; if sequence frames are being labeled and a labeling result for the previous frame exists, a multi-target tracking algorithm can transfer the labeling result of the previous frame to the current frame;
[0069] Interaction module 5: the user checks the automatically constructed 3D target bounding boxes displayed by the third visualization unit and fine-tunes them using input devices such as a mouse and keyboard; during adjustment, the length, width, height, and position of a bounding box adapt to the obstacle target's point cloud data. When an error occurs in the obstacle target clustering displayed by the third visualization unit, the user can frame-select the point cloud in a screen region with the mouse to manually construct the obstacle frame;
[0070] Image auxiliary module 6: for the final 3D target boxes displayed by the fourth visualization unit, the 2D and 3D boxes are obtained by converting them from the lidar coordinate system to the camera coordinate system, and are drawn in the image;
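The lidar-to-image mapping used by module 6 can be sketched with a standard pinhole projection. The extrinsic matrix `T_cam_lidar` and intrinsics `K` correspond to the "lidar coordinate system and camera coordinate system conversion data" read by module 1; the specific function names here are illustrative.

```python
import numpy as np

def box_corners(center, size, yaw):
    """Eight corners of a yaw-oriented 3D box in lidar coordinates."""
    l, w, h = size
    x = np.array([ l,  l, -l, -l,  l,  l, -l, -l]) / 2
    y = np.array([ w, -w, -w,  w,  w, -w, -w,  w]) / 2
    z = np.array([-h, -h, -h, -h,  h,  h,  h,  h]) / 2
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])  # rotation about z
    return (R @ np.vstack([x, y, z])).T + np.asarray(center)

def project_to_image(pts_lidar, T_cam_lidar, K):
    """Map lidar points into pixel coordinates.

    T_cam_lidar: 4x4 lidar-to-camera extrinsic transform.
    K: 3x3 camera intrinsic matrix. Returns (N, 2) pixel coordinates (u, v).
    """
    homo = np.hstack([pts_lidar, np.ones((len(pts_lidar), 1))])
    pts_cam = (T_cam_lidar @ homo.T)[:3]
    uvw = K @ pts_cam
    return (uvw[:2] / uvw[2]).T   # perspective divide
```

Projecting the eight corners of a box and taking their min/max in pixel space would give the corresponding 2D box drawn in the image.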
[0071] Annotation result output module 7: outputs and stores the final 3D target boxes displayed by the fourth visualization unit as a file in a specified format.
[0072] The semi-supervised target labeling method and system for 3D point cloud data provided by the above embodiments of the present invention include: reading the original 3D point cloud data of the current frame; preprocessing the original 3D point cloud data to obtain obstacle point cloud data containing only obstacle targets; performing unsupervised target detection on the obstacle point cloud data to obtain obstacle target 3D bounding boxes; using a multi-target tracking algorithm to automatically predict the obstacle target 3D bounding boxes of the current frame from the labeling result of the previous frame, and fusing them with the 3D bounding boxes obtained by unsupervised target detection; and checking and adjusting the fused obstacle target 3D bounding boxes to obtain the final 3D bounding boxes of the current frame, which constitute the final labeling result of the current frame. The method and system well solve the problem of three-dimensional point cloud data labeling in the prior art and minimize the cost of labeling.
[0073] It should be noted that the steps of the method provided by the present invention can be realized using the corresponding modules, devices, and units in the system, and those skilled in the art can refer to the technical solution of the system to realize the procedure of the method's steps. That is, the embodiments of the system can be understood as preferred examples for implementing the method, and details are not repeated here.
[0074] Specific embodiments of the present invention have been described above. It should be understood that the present invention is not limited to the specific embodiments described above, and those skilled in the art may make various changes or modifications within the scope of the claims, which do not affect the essence of the present invention.
