Automatic driving lane information detection method based on radar point cloud and image fusion

An image fusion and automatic driving technology, applied to measuring devices, surveying and mapping, navigation, road network navigators, and similar fields. It addresses problems such as discontinuous radar point clouds that easily cause false and missed detections and the difficulty of meeting the needs of unmanned driving tasks, achieving good robustness and a high detection success rate.

Pending Publication Date: 2022-02-11
Applicant: 荆州智达电动汽车有限公司 (Jingzhou Zhida Electric Vehicle Co., Ltd.)
Cites: 0 · Cited by: 2

AI-Extracted Technical Summary

Problems solved by technology

[0003] In a traditional lane detection system, the normal operation of an image-based lane detection algorithm depends on good lighting conditions, and in complex road environments, for example when vehicles occlude the road or lane lines are missing, the accuracy of the algorithm model drops significantly, making it difficult to meet the requirements of unmanned driving tasks; the lane line detect...


Abstract

The invention discloses an automatic driving lane information detection method based on radar point cloud and image fusion, and relates to the technical field of automatic driving perception. The method computes more precise driving semantic information, including the center line, heading angle, and stop line, from lane line information detected in real time. The method has an exception handling function that deals with exceptions such as lane line recognition errors and lane line loss, preventing unknown errors while the vehicle is running. By deploying the method, an unmanned vehicle can perceive the road semantic information of the surrounding road surface in real time to assist in driving tasks, with very good robustness and a high detection success rate.

Application Domain

Instruments for road network navigation · Character and pattern recognition (+1)

Technology Topic

Engineering · Real-time computing (+5)


Examples

  • Experimental program (1)

Example Embodiment

[0019] To make the above objects, features, and advantages of the present invention more apparent, the present invention is described in further detail below with reference to Figures 1 to 4 and the specific embodiments.
[0020] The automatic driving lane information detection method based on radar point cloud and image fusion is implemented by the system architecture shown in Figure 1, with the data connections shown in Figure 2. The system includes a sensor group composed of a lidar, a camera, and an inertial measurement unit, a core computing unit, an embedded computing unit, and a 5G router; the sensor group, the core computing unit, and the embedded computing unit are connected through a HUB. In this embodiment, the embedded computing unit is used to prepare data and the core computing unit is used to run the models. Multiple lidars may be employed, each connected to the system via the HUB.
[0021] In this embodiment, the automatic driving lane information detection method based on radar point cloud and image fusion realizes real-time lane line detection, lane marking segmentation recognition, and construction of a high-precision map for driving. The method is executed in the core computing unit and specifically includes the following steps:
[0022] Step 1: sensor data acquisition and processing.
[0023] First, the lidar, camera, and inertial sensor data receiving nodes are started to receive the data acquired by each sensor: the point cloud data collected by the lidar, the image data collected by the camera, and the pose data collected by the inertial sensor. Each sensor's data is sent to the corresponding data preprocessing module: the image data is Gaussian-filtered and its brightness and saturation are adjusted, and the laser point cloud data is denoised. The multi-sensor data is then synchronized based on timestamps, and a software node is established to publish the synchronized multi-sensor data.
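A minimal sketch of the preprocessing and timestamp synchronization described in [0023], assuming OpenCV/NumPy and simple nearest-timestamp pairing; the filter sizes, scaling factors, and tolerance are illustrative assumptions, not the patent's values.

```python
# Illustrative sketch only (not the patent's code): preprocessing and timestamp
# synchronization for camera, lidar, and inertial data.
import numpy as np
import cv2

def preprocess_image(img_bgr, brightness=1.1, saturation=1.2):
    """Gaussian-filter the camera image and adjust brightness/saturation."""
    img = cv2.GaussianBlur(img_bgr, (5, 5), sigmaX=1.0)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * saturation, 0, 255)   # saturation channel
    hsv[..., 2] = np.clip(hsv[..., 2] * brightness, 0, 255)   # value (brightness) channel
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

def denoise_point_cloud(points, min_range=0.5, max_range=60.0):
    """Simple range-gate denoising of an (N, 3) lidar point cloud."""
    dist = np.linalg.norm(points[:, :3], axis=1)
    return points[(dist > min_range) & (dist < max_range)]

def synchronize(frames, tolerance=0.05):
    """Pair camera/lidar/imu frames whose timestamps differ by less than `tolerance` seconds.

    `frames` is a dict of lists of (timestamp, data) tuples keyed by sensor name.
    Returns a list of dicts, one per synchronized frame set.
    """
    synced = []
    for t_img, img in frames["camera"]:
        best = {}
        for name in ("lidar", "imu"):
            candidates = [(abs(t - t_img), d) for t, d in frames[name]]
            dt, d = min(candidates, key=lambda c: c[0])
            if dt < tolerance:
                best[name] = d
        if len(best) == 2:
            synced.append({"camera": img, "stamp": t_img, **best})
    return synced
```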
[0024] Step 2: lane line detection and lane marking segmentation recognition.
[0025] (1) Lane line detection. The lane line detection model fuses point cloud and image features, using an Hourglass deep network to extract features; the fused features then enter the output branch to perform lane line detection. In the specific process, the core computing unit first initializes the Hourglass network model and then subscribes to the synchronized data node to obtain the synchronized multi-sensor data. When the node receives data, the computing unit runs the lane line detection model, and the detection and segmentation results produced by the model are published on ROS.
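A hedged ROS (rospy) sketch of the subscribe-run-publish flow in [0025]; the topic names, message types, and the `model.infer` placeholder are assumptions for illustration, not the patent's actual node.

```python
# Minimal ROS node sketch (assumed topic names, placeholder model) illustrating
# the lane line detection flow: subscribe to synchronized data, run the model,
# publish the result on ROS.
import rospy
import message_filters
from sensor_msgs.msg import Image, PointCloud2

class LaneDetectionNode:
    def __init__(self, model):
        self.model = model                      # e.g. a loaded Hourglass network
        self.pub = rospy.Publisher("/lane_detection/result", Image, queue_size=1)
        img_sub = message_filters.Subscriber("/camera/image_synced", Image)
        pc_sub = message_filters.Subscriber("/lidar/points_synced", PointCloud2)
        sync = message_filters.ApproximateTimeSynchronizer(
            [img_sub, pc_sub], queue_size=5, slop=0.05)
        sync.registerCallback(self.callback)

    def callback(self, img_msg, pc_msg):
        if self.model is None:                  # placeholder guard for this sketch
            return
        # Fuse image and point-cloud features and run lane line detection.
        result_msg = self.model.infer(img_msg, pc_msg)
        self.pub.publish(result_msg)

if __name__ == "__main__":
    rospy.init_node("lane_line_detection")
    LaneDetectionNode(model=None)               # load the real model here in practice
    rospy.spin()
```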
[0026] (2) Lane marking segmentation recognition. A lane marking segmentation recognition model for high-precision maps is adopted. The point cloud is cropped to a fixed range (20 m × 20 m × 10 m) centered on the lidar, and the bird's-eye view of the point cloud is obtained as the model input. The segmentation model uses an improved U-Net network consisting mainly of encoder layers and decoder layers. The encoder layers use 7×7 convolution kernels instead of the conventional 3×3 kernels, so that each convolutional layer covers a larger range of information. Each encoder layer in the network performs two convolution operations and one max-pooling operation; after execution, the encoder layer saves its convolution output and passes the pooled output to the next encoder layer. In the decoder, each decoding layer performs an up-convolution, concatenates the result with the saved convolution output of the corresponding encoder layer, and feeds the combined features into the next decoder layer. The output of the final decoder layer is passed through a 1×1 convolution and then a softmax layer to obtain the final segmentation result. Compared with the original U-Net model, this improved model uses IoU (Intersection-over-Union) as the network's loss function, which gives better results in the lane marking segmentation task. The IoU loss function is defined as follows:
[0027] Loss_IoU = 1 − |A ∩ B| / |A ∪ B|
[0028] where Loss_IoU is the loss function, A is the set of lane marking points predicted by the network, and B is the set of points in the segmentation label.
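A PyTorch sketch of the improved U-Net and IoU loss described in [0026]-[0028], assuming unspecified details (channel widths, network depth, and 3×3 decoder convolutions); it follows the stated design of 7×7 encoder kernels, two convolutions plus one max-pool per encoder layer, skip connections, and a 1×1 convolution with softmax, but is not the patent's implementation.

```python
# Illustrative PyTorch sketch (not the patent's code) of the improved U-Net
# for lane marking segmentation on the point-cloud bird's-eye view, with an
# IoU loss of the form 1 - |A ∩ B| / |A ∪ B|.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch, k=7):
    """Two convolutions per layer; 7x7 kernels give each layer a larger receptive field."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, k, padding=k // 2), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, k, padding=k // 2), nn.ReLU(inplace=True),
    )

class ImprovedUNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=2, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)                               # one max-pool per encoder layer
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2, k=3)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base, k=3)
        self.head = nn.Conv2d(base, num_classes, kernel_size=1)   # final 1x1 convolution

    def forward(self, x):
        e1 = self.enc1(x)                                         # saved for the skip connection
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))      # up-conv + concat with encoder output
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return F.softmax(self.head(d1), dim=1)                    # per-pixel class probabilities

def iou_loss(pred, target, eps=1e-6):
    """Soft IoU loss: 1 - |A ∩ B| / |A ∪ B|, with pred/target maps in [0, 1]."""
    inter = (pred * target).sum(dim=(2, 3))
    union = pred.sum(dim=(2, 3)) + target.sum(dim=(2, 3)) - inter
    return (1.0 - (inter + eps) / (union + eps)).mean()
```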
[0029] Similar to the lane line detection, in the actual segmentation process the core computing unit first initializes the model, then subscribes to the synchronized data node to obtain the synchronized multi-sensor data. When the node receives data, the core computing unit runs the lane marking segmentation model, and the segmentation result produced by the model is published by the software node.
[0030] Step 3: construction of the high-precision map.
[0031] As shown in Figure 4, each road segment is first collected separately, recording the lidar sensor data and dividing it into sequence 1, sequence 2, ..., sequence N, where each sequence represents the data collected by one lidar; that is, sequence 1 is the data collected by the first lidar, sequence 2 is the data collected by the second lidar, and so on. The data is then processed with the preprocessing algorithms, including filtering and point cloud denoising. After processing, a plane-fitting algorithm is used to obtain the road surface, and the point cloud data is projected onto that plane. The high-precision map module runs the lane marking segmentation model on the bird's-eye view of each point cloud to obtain the road surface information, and superimposes the lane marking information onto the global point cloud. Since the segmentation algorithm produces some errors and cannot guarantee that the road semantic information of the high-precision map is accurate, the segmentation results are manually calibrated and repaired, yielding the high-precision map corresponding to each sequence. By performing the above operations on multiple sequences and projecting the results into the world coordinate system using the pose collected by the inertial sensor, the complete high-precision map is obtained.
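A hedged NumPy sketch of two steps from [0031]: fitting a ground plane to a lidar point cloud and transforming points into the world frame with an inertial pose. The RANSAC parameters and the pose representation are assumptions, not values stated in the patent.

```python
# Illustrative sketch only (not the patent's code): ground-plane fitting,
# projection onto the fitted plane, and transformation to the world frame.
import numpy as np

def fit_ground_plane(points, n_iters=200, dist_thresh=0.1, seed=0):
    """RANSAC plane fit on an (N, 3) point cloud; returns (normal, d) with n·p + d = 0."""
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = 0, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                                   # degenerate (collinear) sample
        normal /= norm
        d = -normal.dot(sample[0])
        inliers = int((np.abs(points @ normal + d) < dist_thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane

def project_to_plane(points, normal, d):
    """Orthogonally project points onto the fitted road plane."""
    dist = points @ normal + d
    return points - np.outer(dist, normal)

def to_world_frame(points, R_world_lidar, t_world_lidar):
    """Transform lidar-frame points into the world frame using an inertial pose (R, t)."""
    return points @ R_world_lidar.T + t_world_lidar
```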
[0032] Step 4: decision making based on the lane line detection and lane marking segmentation results.
[0033] When the lane line detection and segmentation data are received, the decision control module judges the road conditions from the lane marking segmentation results and computes the lane information (such as the center line, heading angle, and stop line) from the lane line information in the front view. On this basis, the decision control module calculates the speed and steering parameters of the vehicle and sends them to the vehicle's low-level controller over the CAN bus.
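A small sketch of the geometric part of [0033], assuming the two detected lane lines are available as polylines in the vehicle frame; the proportional steering rule is purely an illustrative assumption, not the patent's control law.

```python
# Illustrative sketch only (not the patent's code): derive the center line and
# heading angle from two lane-line polylines and form a toy steering command.
import numpy as np

def center_line(left_lane, right_lane):
    """Average two (N, 2) lane-line polylines sampled at the same longitudinal points."""
    return (np.asarray(left_lane) + np.asarray(right_lane)) / 2.0

def heading_angle(center, lookahead_idx=5):
    """Heading angle (rad) of the center line a few points ahead of the vehicle."""
    p0 = center[0]
    p1 = center[min(lookahead_idx, len(center) - 1)]
    return float(np.arctan2(p1[1] - p0[1], p1[0] - p0[0]))

def steering_command(heading, lateral_offset, k_heading=1.0, k_offset=0.5):
    """Toy proportional rule combining heading error and lateral offset."""
    return k_heading * heading + k_offset * lateral_offset
```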
[0034] The above are only specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto. Any change or substitution that a person skilled in the art can readily conceive of within the scope disclosed by the present invention should be covered by the scope of protection of the present invention. Therefore, the scope of protection of the present invention should be determined by the scope of protection of the claims.

