
Laser point cloud and image data fusion-based lane line extraction method

A laser point cloud and image data fusion technology, applied in image data processing, image analysis, and image enhancement, that addresses the problems that laser point cloud data alone cannot deliver accurate lane line positioning, detection, and extraction, that existing approaches follow a single line of thinking, and that the road surface cannot always be separated from its surroundings.

Active Publication Date: 2017-12-12
WUHAN UNIV

AI Technical Summary

Problems solved by technology

[0003] Research on extracting lane lines with digital image processing techniques is abundant both at home and abroad. Judging from the existing, relatively mature extraction methods, most scholars use threshold segmentation algorithms based on the HSI color space and lane line detection algorithms based on the improved Hough transform. Although this class of methods has matured considerably, actual road conditions are complex: lane lines are frequently worn, and as they wear their color is easily confused with that of the road surface, resulting in low lane line detection rates, false detections, and missed detections.
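For context, the conventional image-only pipeline referred to above can be sketched roughly as follows, assuming an OpenCV/NumPy environment and a BGR road image; the HSV thresholds and Hough parameters are illustrative placeholders, not values from the patent.

```python
# Minimal sketch of the conventional image-only pipeline: colour-space
# thresholding followed by a probabilistic Hough transform.
# Assumes OpenCV and NumPy; all thresholds are illustrative placeholders.
import cv2
import numpy as np

def detect_lane_lines_image_only(bgr_image):
    # Threshold in HSV (a close relative of the HSI space mentioned above)
    # to keep bright, low-saturation pixels typical of white lane paint.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))

    # Edge detection on the masked image, then Hough line extraction.
    masked = cv2.bitwise_and(bgr_image, bgr_image, mask=mask)
    edges = cv2.Canny(masked, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=20)
    return [] if lines is None else lines.reshape(-1, 4)
```

As the paragraph notes, such a purely colour- and edge-driven pipeline degrades quickly once lane paint is worn or its colour blends into the asphalt.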
[0004] In lane line detection using laser data, existing research usually combines the echo reflectivity, scanning angle, and measurement distance of the laser point cloud to extract the attribute information of road markings, applies a least-squares linear fitting algorithm to fit the extracted marking point cloud, generates a CAD outline of the road marking, and thereby realizes automatic recognition of road markings. Although these schemes achieve some preliminary feature recognition, the line of thinking they follow is relatively narrow, or they focus only on idealized conditions, so the generality, accuracy, and robustness of the algorithms remain far from sufficient.
When applied to actual, complex, and changeable road conditions, they are easily disturbed by unavoidable situations such as occlusion by surrounding vehicles and lane line wear, and where the road is flush with its surroundings the road surface may not be separable at all, so laser point cloud data alone often cannot achieve sufficiently fine and accurate lane line positioning, detection, and extraction.
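The laser-only approach described above can likewise be sketched in a few lines, assuming the point cloud is already reduced to road-surface points stored as an (N, 4) NumPy array of x, y, z and echo intensity; the reflectivity threshold and the simple least-squares line fit are assumptions for illustration only.

```python
# Sketch of the laser-only approach: pick high-reflectivity road-surface
# points (lane paint reflects more strongly than asphalt) and fit a line
# to them by least squares. The intensity threshold is a placeholder.
import numpy as np

def extract_marking_line(road_points, intensity_threshold=0.6):
    # Candidate marking points: returns with above-threshold intensity.
    intensity = road_points[:, 3]
    marking = road_points[intensity > intensity_threshold]

    # Least-squares fit y = a*x + b through the candidate marking points.
    a, b = np.polyfit(marking[:, 0], marking[:, 1], deg=1)
    return marking, (a, b)
```

As argued above, a single cue of this kind is fragile under occlusion, wear, and roads that are flush with their surroundings, which is what motivates fusing the point cloud with image data.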

Method used



Embodiment Construction

[0046] The technical solutions of the present invention will be described in detail below with reference to the drawings and embodiments.

[0047] Referring to Figure 4, the specific implementation process of lane line detection based on multi-source data fusion provided by the embodiment of the present invention is as follows:

[0048] Step 1: Data preprocessing. This step includes preprocessing of the original laser point cloud (the purpose is to extract the road surface point cloud) and preprocessing of the image. Image preprocessing in turn includes image segmentation, image denoising, and image enhancement. The segmentation step needs to extract the part of the image of interest (that is, the lane lines), which can be done by using the lidar data together with the characteristics of the photographic image itself, as in the sketch below.
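A minimal sketch of the image side of Step 1, assuming OpenCV/NumPy and a BGR input; the region of interest here is a hypothetical placeholder (the lower half of the frame), whereas in the method it would be derived from the lidar data as described above.

```python
# Sketch of the image side of Step 1: denoise, enhance contrast, and cut
# out a region of interest. The ROI polygon would in practice come from
# the lidar-derived road geometry; here it is a placeholder.
import cv2
import numpy as np

def preprocess_image(bgr_image):
    # Denoising: a small Gaussian blur suppresses sensor noise.
    denoised = cv2.GaussianBlur(bgr_image, (5, 5), 0)

    # Enhancement: histogram equalisation on the luminance channel
    # reduces the influence of uneven illumination.
    ycrcb = cv2.cvtColor(denoised, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    enhanced = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

    # Segmentation: keep only the part of the image likely to contain
    # the lane lines (placeholder: the lower half of the frame).
    h, w = enhanced.shape[:2]
    roi_mask = np.zeros((h, w), dtype=np.uint8)
    roi_mask[h // 2:, :] = 255
    return cv2.bitwise_and(enhanced, enhanced, mask=roi_mask)
```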

[0049] Preprocessing of the original laser point cloud: First, find the trajectory points corresponding to each group of point clouds, use the trajectory points to remove the interf...
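Since the paragraph is truncated here, the following is only a hedged sketch of the trajectory-based filtering it begins to describe: points far from the recorded vehicle trajectory are treated as interference and discarded. The lateral-distance threshold and the use of a k-d tree are assumptions, not details from the patent.

```python
# Hedged sketch of trajectory-based point-cloud preprocessing: keep only
# points within an assumed lateral distance of the vehicle trajectory,
# which discards most off-road (interference) returns.
import numpy as np
from scipy.spatial import cKDTree

def crop_point_cloud_to_trajectory(points_xyz, trajectory_xyz,
                                   max_lateral_dist=15.0):
    # For each laser point, horizontal-plane distance to the nearest
    # trajectory point; distant points are dropped as interference.
    tree = cKDTree(trajectory_xyz[:, :2])
    dist_2d, _ = tree.query(points_xyz[:, :2])
    return points_xyz[dist_2d < max_lateral_dist]
```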



Abstract

The invention relates to a laser point cloud and image data fusion-based lane line extraction method. The method comprises the following steps: 1, preprocessing an original laser point cloud to extract a road surface point cloud, and preprocessing an original image to remove the influence of noise, illumination, and the like; 2, extracting a road boundary point cloud from the road surface point cloud obtained in step 1, and, using the principle that the distance between a lane line and the road boundary is constant, determining the point cloud position of the lane line; 3, registering the lane line point cloud obtained in step 2 with the preprocessed image, and roughly determining the approximate position of the lane line on the image; and 4, performing accurate lane line detection within the image region determined in step 3. By fully utilizing the complementary advantages of point cloud data and image data, the method extracts lane lines more accurately and robustly.
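As a rough illustration of how steps 3 and 4 of the abstract fit together, the sketch below projects the lane-line point cloud into the image with a calibrated camera to obtain a coarse region of interest in which the fine detection is then run. The camera parameters (rvec, tvec, K, dist_coeffs) and the bounding-box registration are hypothetical placeholders; the abstract does not specify how the registration is performed.

```python
# Hedged sketch of Steps 3-4: project the lane-line point cloud into the
# image with a calibrated camera to obtain a rough region of interest,
# then restrict the fine image-based detection to that region.
import cv2
import numpy as np

def lane_roi_from_point_cloud(lane_points_3d, image, rvec, tvec, K,
                              dist_coeffs, margin_px=20):
    # Project 3D lane-line points (N, 3) into pixel coordinates.
    pixels, _ = cv2.projectPoints(lane_points_3d.astype(np.float32),
                                  rvec, tvec, K, dist_coeffs)
    pixels = pixels.reshape(-1, 2)

    # Bounding box of the projected points, padded by a safety margin
    # and clipped to the image extent.
    h, w = image.shape[:2]
    x0, y0 = np.maximum(pixels.min(axis=0) - margin_px, 0).astype(int)
    x1 = int(min(pixels[:, 0].max() + margin_px, w - 1))
    y1 = int(min(pixels[:, 1].max() + margin_px, h - 1))
    return image[y0:y1, x0:x1], (x0, y0, x1, y1)
```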

Description

Technical field

[0001] The invention belongs to the field of multi-source data fusion processing and pattern recognition, and relates to a lane line extraction method based on the fusion of laser point cloud and image data.

Background technique

[0002] Nowadays, commonly used road-level map data can no longer meet the needs of advanced driver assistance systems (ADAS) and smart cars; lane-level information is needed to assist path planning and decision-making. Therefore, the extraction of lane lines is particularly important in the production of high-precision maps.

[0003] In extracting lane lines with digital image processing techniques, research at home and abroad is abundant. Judging from the existing, relatively mature extraction methods, most scholars use threshold segmentation algorithms based on the HSI color space and lane line detection algorithms based on the improved Hough transform. Although this type of method has matured considerably, in view of the compl...

Claims


Application Information

IPC(8): G06K9/00; G06K9/40; G06T7/13; G06T7/11; G06T7/80; G06T7/30
CPC: G06T7/11; G06T7/13; G06T7/30; G06T7/80; G06T2207/30256; G06V20/588; G06V10/30
Inventor 黄玉春范佳张丽姜文宇谢荣昌彭淑雯张童瑶
Owner WUHAN UNIV