[0019] To make the above objects, features, and advantages of the present invention more apparent, the present invention is described in further detail below with reference to Figures 1 to 4 and specific embodiments.
[0020] The method for detecting autonomous-driving lane information based on the fusion of radar point clouds and images is implemented by the system architecture shown in Figure 1 and connected by the data lines shown in Figure 2. The system includes a sensor group composed of a laser radar, a camera, and an inertial sensor, a core computing unit, an embedded computing unit, and a 5G router; the sensor group, the core computing unit, and the embedded computing unit are connected through a HUB. In this embodiment, the embedded computing unit is used to prepare data, and the core computing unit is used to run the models. Multiple laser radars may be employed, each laser radar being connected to the system via the HUB.
[0021] In this embodiment, the autonomous-driving lane information detection method based on radar point cloud and image fusion realizes real-time lane line detection, lane marking recognition, and construction of a high-precision map of the driving environment. The method is executed in the core computing unit and specifically includes the following steps:
[0022] Step 1, sensor data acquisition and processing.
[0023] First, the data-receiving nodes of the laser radar, camera, and inertial sensor are started to receive the data acquired by each sensor, specifically: the point cloud data collected by the laser radar, the image data collected by the camera, and the pose data collected by the inertial sensor. Each sensor's data is sent to the corresponding data preprocessing module: the image data undergoes Gaussian filtering and brightness/saturation adjustment, and the laser point cloud data undergoes noise reduction. The multi-sensor data is then synchronized based on timestamps, and a software node is established to publish the synchronized multi-sensor data.
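For illustration only (the patent does not disclose concrete code), the preprocessing and timestamp synchronization described above could be sketched as follows; the Gaussian kernel size, the outlier-removal parameters, and the synchronization tolerance are assumptions:

```python
import cv2
import numpy as np
from scipy.spatial import cKDTree

def preprocess_image(img, brightness=1.0, saturation=1.0):
    """Gaussian filtering plus brightness/saturation adjustment (step 1)."""
    img = cv2.GaussianBlur(img, (5, 5), 0)  # kernel size is an assumption
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * saturation, 0, 255)  # S channel
    hsv[..., 2] = np.clip(hsv[..., 2] * brightness, 0, 255)  # V channel
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

def denoise_point_cloud(points, k=16, std_ratio=2.0):
    """Statistical outlier removal for an (N, 3) lidar point cloud."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)  # first neighbor is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]

def synchronize(stamps_a, stamps_b, max_dt=0.05):
    """Pair each timestamp in stamps_a with the nearest one in stamps_b."""
    pairs = []
    for i, t in enumerate(stamps_a):
        j = int(np.argmin(np.abs(np.asarray(stamps_b) - t)))
        if abs(stamps_b[j] - t) <= max_dt:
            pairs.append((i, j))
    return pairs
```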
[0024] Step 2, lane line detection and lane marking recognition.
[0025] (1) Lane line detection. The lane line detection model uses fused point cloud and image features: features are extracted with an Hourglass deep network, and the fused features are then fed into an output branch to perform lane line detection. Specifically, the core computing unit first initializes the Hourglass network model and then subscribes to the synchronized-data node to obtain the synchronized multi-sensor data. When the node receives data, the computing unit runs the lane line detection model, and the detection and segmentation results produced by the model are published on ROS.
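A minimal ROS node sketch of this subscribe-run-publish loop is given below; the topic names (/sync/image, /lane/detection) and the detector stub are hypothetical placeholders, since the patent only names the workflow:

```python
import rospy
from sensor_msgs.msg import Image

class LaneDetectionNode:
    """Subscribe to synchronized data, run the detector, publish results.
    Topic names and the detector stub are hypothetical placeholders."""

    def __init__(self, model):
        self.model = model  # an initialized Hourglass network (placeholder)
        self.pub = rospy.Publisher("/lane/detection", Image, queue_size=1)
        rospy.Subscriber("/sync/image", Image, self.on_data, queue_size=1)

    def on_data(self, msg):
        result = self.model(msg)  # run lane line detection on the incoming frame
        self.pub.publish(result)  # publish the detection/segmentation result

if __name__ == "__main__":
    rospy.init_node("lane_detection")
    LaneDetectionNode(model=lambda m: m)  # identity stub stands in for the network
    rospy.spin()
```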
[0026] (2) Lane marking segmentation and recognition. A lane marking segmentation model is used for building the high-precision map. A point cloud within a fixed range (20 m × 20 m × 10 m) around the vehicle is selected and rendered as a bird's-eye view, which serves as the model input. The segmentation model uses an improved U-Net network, consisting mainly of encoder layers and decoder layers. The encoder layers use 7 × 7 convolution kernels instead of the conventional 3 × 3 kernels, so that each convolution covers a wider range of information. Each encoder layer in the network performs two convolution operations and one max-pooling operation; the encoder layer saves its convolution output and passes the pooled output to the next encoder layer. In the decoder, each decoding layer up-samples its input by transposed convolution, concatenates the result with the saved convolution output of the corresponding encoder layer, and feeds the combined features into the next decoder layer. The output of the final decoder layer is passed through a 1 × 1 convolution and then a softmax layer to obtain the final segmentation result. Compared with the original U-Net model, this improved model uses IoU (Intersection-over-Union) as the loss function of the network, which achieves a better effect on the lane marking segmentation task. The IoU loss function is defined as follows:
[0027] $Loss_{IoU} = 1 - \dfrac{|A \cap B|}{|A \cup B|}$
[0028] Among them, $Loss_{IoU}$ is the loss function, A is the set of lane marking points predicted by the network, and B is the set of lane marking points in the segmentation label.
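A PyTorch sketch of this loss and of an encoder block with the 7 × 7 convolutions described above might look as follows; the channel layout and the soft (probabilistic) relaxation of |A ∩ B| and |A ∪ B| are assumptions, since the patent gives only the set-based definition:

```python
import torch
import torch.nn as nn

def iou_loss(pred, target, eps=1e-6):
    """Loss_IoU = 1 - |A ∩ B| / |A ∪ B|, soft relaxation: pred holds per-pixel
    lane-marking probabilities, target holds the 0/1 labels."""
    inter = (pred * target).sum()              # soft |A ∩ B|
    union = pred.sum() + target.sum() - inter  # soft |A ∪ B|
    return 1.0 - inter / (union + eps)

class EncoderBlock(nn.Module):
    """Two 7x7 convolutions followed by one max pooling, as described above."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        skip = self.conv(x)  # saved output, concatenated later in the decoder
        return self.pool(skip), skip
```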
[0029] Similar to lane line detection, in the actual segmentation process the core computing unit first initializes the model and then subscribes to the synchronized-data node to obtain the synchronized multi-sensor data. When the node receives data, the core computing unit runs the lane marking segmentation model, and the segmentation results produced by the model are published on ROS.
[0030] Step 3, obtain the high-precision map.
[0031] As shown in Figure 4, the data for each road segment is first collected, and the recorded laser radar sensor data is divided into sequence 1, sequence 2, ..., sequence N, each sequence representing the data collected by one laser radar; that is, sequence 1 is the data collected by the first laser radar, sequence 2 is the data collected by the second laser radar, and so on. The data is then processed with preprocessing algorithms, including filtering and point cloud noise reduction. After preprocessing, a plane-fitting algorithm is applied to obtain the road surface information and project the point cloud data onto the plane. The high-precision map module runs the lane marking segmentation model on each point cloud bird's-eye view to obtain the road surface information, and superimposes the lane marking information onto the global point cloud. Since the segmentation algorithm produces some errors and cannot guarantee that the road semantic information of the high-precision map is accurate, the segmentation results are manually calibrated and repaired, yielding the high-precision map corresponding to each sequence. By performing the above operations on the multiple sequences and projecting the results into the world coordinate system using the poses collected by the inertial sensor, the complete high-precision map is obtained.
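As a hedged sketch of the plane-fitting and ground-projection step, using Open3D's RANSAC plane segmentation (the distance threshold and iteration count are assumptions not stated in the patent):

```python
import numpy as np
import open3d as o3d

def fit_road_plane(points):
    """Fit the road surface plane with RANSAC and project points onto it."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    # Plane model ax + by + cz + d = 0; parameters below are assumptions.
    (a, b, c, d), inliers = pcd.segment_plane(
        distance_threshold=0.05, ransac_n=3, num_iterations=1000)
    n = np.array([a, b, c])
    norm = np.linalg.norm(n)
    n, d = n / norm, d / norm
    # Project every point onto the fitted road plane.
    dist = points @ n + d
    projected = points - np.outer(dist, n)
    return projected, inliers
```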
[0032] Step 4, make driving decisions according to the lane line detection and lane marking recognition results.
[0033] When the lane line detection and segmentation data are received, the decision control module judges the road conditions from the lane marking segmentation results and computes the lane information from the lane line information in the front view. On this basis, the decision control module finally calculates the speed and steering parameters of the vehicle and sends them to the vehicle's low-level controller via the CAN bus.
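As a purely illustrative sketch (the patent specifies neither the control law nor the CAN frame layout), the decision step could compute a proportional steering command from the lateral lane offset and send it over the CAN bus via python-can; the gains, the arbitration ID, and the byte encoding are all assumptions:

```python
import can

K_STEER = 0.8   # proportional steering gain (assumption)
V_BASE = 10.0   # base speed in m/s (assumption)

def decide(lane_center_offset_m, road_clear):
    """Compute speed and steering from the lane line / marking results."""
    steer = -K_STEER * lane_center_offset_m  # steer back toward the lane center
    speed = V_BASE if road_clear else 0.0    # stop if the road is judged unclear
    return speed, steer

def send_to_vehicle(bus, speed, steer):
    """Pack the commands into a hypothetical CAN frame and send it."""
    data = [int(speed * 10) & 0xFF, int((steer + 3.2) * 10) & 0xFF]
    msg = can.Message(arbitration_id=0x123, data=data, is_extended_id=False)
    bus.send(msg)

# bus = can.interface.Bus(channel="can0", bustype="socketcan")
# send_to_vehicle(bus, *decide(lane_center_offset_m=0.3, road_clear=True))
```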
[0034] The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can readily be conceived by those skilled in the art within the technical scope disclosed by the present invention should be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention should be determined by the protection scope of the claims.