[0049] The following is a detailed description of an embodiment of the present invention. The embodiment is implemented on the premise of the technical solution of the present invention and provides a detailed implementation method and specific operation process. It should be noted that those skilled in the art can make several modifications and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention.
[0050] An embodiment of the present invention provides a semi-supervised target labeling method for 3D point cloud data, comprising the following steps:
[0051] Step 1: read the original 3D point cloud data of the current frame;
[0052] Step 2: preprocess the original point cloud data read in step 1, specifically including region of interest (ROI) selection and robust ground segmentation, so that points outside the region of interest and ground points are filtered out of the read original 3D point cloud data, yielding obstacle point cloud data containing only obstacle targets;
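By way of non-limiting illustration only, the preprocessing of step 2 may be sketched in Python as follows. The sketch assumes that a frame is an N-by-4 numpy array of x, y, z, intensity values, and uses an axis-aligned box for the ROI selection and a RANSAC plane fit for the robust ground segmentation; these particular choices are assumptions of the sketch, not requirements of the method.

    import numpy as np

    def crop_roi(points, x_range=(-40.0, 40.0), y_range=(-20.0, 20.0), z_range=(-3.0, 3.0)):
        # Keep only points inside an axis-aligned region of interest.
        m = ((points[:, 0] >= x_range[0]) & (points[:, 0] <= x_range[1]) &
             (points[:, 1] >= y_range[0]) & (points[:, 1] <= y_range[1]) &
             (points[:, 2] >= z_range[0]) & (points[:, 2] <= z_range[1]))
        return points[m]

    def remove_ground_ransac(points, n_iters=100, dist_thresh=0.2, rng=None):
        # Fit the dominant plane with RANSAC and drop its inliers (ground points).
        rng = np.random.default_rng() if rng is None else rng
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(n_iters):
            p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False), :3]
            normal = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(normal)
            if norm < 1e-6:          # degenerate (collinear) sample, try again
                continue
            normal /= norm
            dist = np.abs(points[:, :3] @ normal - np.dot(normal, p0))
            inliers = dist < dist_thresh
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return points[~best_inliers]

    # Example usage: obstacle_points = remove_ground_ransac(crop_roi(frame))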
[0053] Step 3: perform unsupervised target detection on the obstacle point cloud data obtained in step 2, including obstacle target clustering and obstacle frame construction: obstacle target clustering refers to dividing the obstacle point cloud data into several subsets, each subset being the set of all points belonging to one obstacle target; obstacle frame construction refers to using a frame construction algorithm to fit the most suitable oriented 3D bounding box to each subset;
[0054] In this step, the most suitable bounding box refers to the one giving the best fitting estimate of the obstacle target orientation.
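Purely as an illustrative sketch of step 3, the obstacle target clustering may be realized with a density-based clustering such as DBSCAN and the obstacle frame construction with a principal-axis (PCA) orientation estimate in the ground plane; these specific algorithms are assumptions of the sketch rather than prescriptions of the method, and the array layout is assumed to match the preprocessing sketch above.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def cluster_obstacles(points, eps=0.6, min_samples=10):
        # Divide obstacle points into per-target subsets (label -1 is noise).
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points[:, :3])
        return [points[labels == k] for k in np.unique(labels) if k != -1]

    def fit_oriented_box(cluster):
        # Estimate the heading from the principal axis of the xy footprint, then
        # take the extents in the rotated frame as the box length, width and height.
        xy = cluster[:, :2]
        mean_xy = xy.mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(np.cov((xy - mean_xy).T))
        major = eigvecs[:, np.argmax(eigvals)]
        yaw = np.arctan2(major[1], major[0])
        to_local = np.array([[np.cos(yaw),  np.sin(yaw)],
                             [-np.sin(yaw), np.cos(yaw)]])      # rotation by -yaw
        local = (xy - mean_xy) @ to_local.T
        lo, hi = local.min(axis=0), local.max(axis=0)
        length, width = hi - lo
        to_world = np.array([[np.cos(yaw), -np.sin(yaw)],
                             [np.sin(yaw),  np.cos(yaw)]])      # rotation by +yaw
        center_xy = mean_xy + ((lo + hi) / 2.0) @ to_world.T
        z_min, z_max = cluster[:, 2].min(), cluster[:, 2].max()
        center = np.array([center_xy[0], center_xy[1], (z_min + z_max) / 2.0])
        return {"center": center, "size": (length, width, z_max - z_min), "yaw": yaw}

    # Example usage: boxes = [fit_oriented_box(c) for c in cluster_obstacles(obstacle_points)]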
[0055] Step 4: if the current frame belongs to a labeled sequence and a labeling result of the previous frame exists, use the multi-target tracking algorithm to automatically predict the obstacle target 3D bounding boxes of the current frame from the labeling result of the previous frame, and fuse the predicted boxes with the obstacle target 3D bounding boxes obtained by the unsupervised target detection of step 3;
[0056] In this step: if the current frame is not part of a sequence, the obstacle target 3D bounding boxes obtained in step 3 are directly used as the final labeling result; in this case labeling is performed frame by frame and each frame is labeled from scratch. If the current frame is part of a sequence, the labeling result of the previous frame can be used to assist the labeling of the current frame.
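As a non-limiting sketch of the fusion in step 4, the example below assumes that each bounding box is represented as a dictionary holding a center, size and yaw, and, for tracked boxes, a track identifier and category; a simple greedy nearest-center association stands in for whatever multi-target tracking and fusion strategy is actually deployed.

    import numpy as np

    def fuse_boxes(predicted, detected, match_dist=2.0):
        # Merge boxes predicted from the previous frame's labels with boxes
        # detected in the current frame.  A detection within match_dist metres
        # (in the ground plane) of a prediction inherits that prediction's
        # identity; unmatched predictions (e.g. occluded targets) are kept.
        fused, used = [], set()
        for det in detected:
            best_j, best_d = None, match_dist
            for j, pred in enumerate(predicted):
                if j in used:
                    continue
                d = np.linalg.norm(np.asarray(det["center"][:2]) -
                                   np.asarray(pred["center"][:2]))
                if d < best_d:
                    best_j, best_d = j, d
            box = dict(det)
            if best_j is not None:
                used.add(best_j)
                box["track_id"] = predicted[best_j].get("track_id")
                box["category"] = predicted[best_j].get("category")
            fused.append(box)
        fused.extend(pred for j, pred in enumerate(predicted) if j not in used)
        return fused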
[0057] Step 5: the user checks and adjusts the fused obstacle target 3D bounding boxes obtained in step 4, thereby obtaining the final labeling result of the current frame, which is output and saved;
[0058] Step 6: the point cloud of the next frame becomes the current frame and steps 1 to 5 are repeated, thereby achieving batch labeling of 3D point cloud data while avoiding complicated and repetitive labeling operations.
[0059] The embodiment of the present invention also provides a semi-supervised target labeling system for 3D point cloud data. The system includes: a data reading module 1, a visualization module 2, a preprocessing module 3, an obstacle frame automatic construction module 4, an interaction module 5, an image assistance module 6 and an annotation result output module 7.
[0060] Data reading module 1: responsible for inputting the required data into the system in the specified format. The necessary data is the 3D point cloud data. Depending on the actual functional requirements, the following may also be needed: target category information data (for customizing the target categories), time stamp data and lidar-to-world coordinate system conversion data (for the multi-target tracking step of module 4), two-dimensional image data and lidar-to-camera coordinate system conversion data (used by module 6 to project the labeling results onto the two-dimensional image), and target labeling result data (used by module 2 to display the labeling result of the current frame, that is, the obstacle target 3D bounding boxes of the current frame).
[0061] Visualization module 2: includes:
[0062] The first visualization unit is used to display the original three-dimensional point cloud data and target labeling result data;
[0063] The second visualization unit is used to display obstacle point cloud data;
[0064] The third visualization unit is used to display the automatically constructed obstacle 3D bounding boxes;
[0065] The fourth visualization unit is used to display the 3D bounding box of the obstacle after manual interaction;
[0066] The fifth visualization unit is used to display the image and the 2D and 3D bounding boxes mapped onto the image;
[0067] Preprocessing module 3: the original point cloud data displayed by the first visualization unit is sequentially subjected to preprocessing operations of ROI selection and ground point cloud removal to obtain an obstacle point cloud containing only obstacle targets;
[0068] Obstacle frame automatic construction module 4: the obstacle point cloud data displayed by the second visualization unit is sequentially processed by obstacle target clustering and obstacle frame construction to obtain the 3D bounding boxes of all possible obstacle targets in the obstacle point cloud; if the current frame belongs to a sequence and an annotation result of the previous frame exists, the multi-target tracking algorithm can be used to transfer the annotation result of the previous frame to the current frame;
[0069] Interaction module 5: the user checks the automatically constructed 3D target bounding boxes displayed by the third visualization unit and slightly adjusts them using input devices such as the mouse and keyboard; during adjustment the length, width, height and position of a bounding box can be adapted to the obstacle target point cloud data. When an error occurs in the obstacle target clustering displayed by the third visualization unit, the user can use the mouse to select a point cloud region on the screen and manually construct the obstacle frame;
[0070] Image auxiliary module 6: the final 3D target frames displayed by the fourth visualization unit are converted from the lidar coordinate system to the camera coordinate system to obtain the corresponding 2D and 3D frames, which are drawn on the image;
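For illustration only, the conversion performed by image auxiliary module 6 may be sketched as below, assuming that the lidar-to-camera conversion data consists of a 4x4 extrinsic matrix and a 3x3 camera intrinsic matrix, and that all box corners lie in front of the camera; these assumptions are introduced for the sketch and are not part of the claimed system.

    import numpy as np

    def box_corners(center, size, yaw):
        # Eight corners of an oriented 3D box in the lidar frame (z axis up).
        l, w, h = size
        x = np.array([ 1,  1, -1, -1,  1,  1, -1, -1]) * l / 2.0
        y = np.array([ 1, -1, -1,  1,  1, -1, -1,  1]) * w / 2.0
        z = np.array([-1, -1, -1, -1,  1,  1,  1,  1]) * h / 2.0
        rot = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                        [np.sin(yaw),  np.cos(yaw), 0.0],
                        [0.0,          0.0,         1.0]])
        return (rot @ np.vstack([x, y, z])).T + np.asarray(center)

    def project_box(corners_lidar, T_lidar_to_cam, K):
        # Transform the corners into the camera frame with the 4x4 extrinsic
        # matrix, project them with the 3x3 intrinsic matrix, and return both
        # the projected 3D frame and its enclosing axis-aligned 2D frame.
        pts_h = np.hstack([corners_lidar, np.ones((8, 1))])
        cam = (T_lidar_to_cam @ pts_h.T).T[:, :3]
        uvw = (K @ cam.T).T
        uv = uvw[:, :2] / uvw[:, 2:3]
        box_2d = np.concatenate([uv.min(axis=0), uv.max(axis=0)])  # [u_min, v_min, u_max, v_max]
        return uv, box_2d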
[0071] Annotation result output module 7: output and store the final 3D object frame displayed by the fourth visualization unit in the form of a file according to a specified format.
[0072] The semi-supervised target labeling method and system for 3D point cloud data provided by the above embodiments of the present invention comprise: reading the original 3D point cloud data of the current frame; preprocessing the original 3D point cloud data to obtain obstacle point cloud data containing only obstacle targets; performing unsupervised target detection on the obstacle point cloud data to obtain obstacle target 3D bounding boxes; using the multi-target tracking algorithm to automatically predict the obstacle target 3D bounding boxes of the current frame from the labeling result of the previous frame and fusing them with the 3D bounding boxes obtained by the unsupervised target detection; and checking and adjusting the fused obstacle target 3D bounding boxes to obtain the final 3D bounding boxes of the current frame, which constitute the final labeling result of the current frame. The method and system can well solve the problem of three-dimensional point cloud data labeling in the prior art and minimize the labeling cost.
[0073] It should be noted that the steps in the method provided by the present invention can be realized by using the corresponding modules, devices, units, etc. of the system, and those skilled in the art can refer to the technical solution of the system to realize the step flow of the method; that is, the embodiments of the system can be understood as preferred examples for implementing the method, and details are not described again here.
[0074] Specific embodiments of the present invention have been described above. It should be understood that the present invention is not limited to the specific embodiments described above, and those skilled in the art may make various changes or modifications within the scope of the claims, which do not affect the essence of the present invention.