Lane line segmentation method and apparatus

A lane line segmentation technology, applied in the field of maps, which solves problems such as large errors, susceptibility to noise, and inaccurate lane line segmentation, and achieves the effect of accurate segmentation and improved accuracy.

Active Publication Date: 2016-06-01
BAIDU ONLINE NETWORK TECH (BEIJING) CO LTD

AI Technical Summary

Problems solved by technology

[0004] The prior art computes the image gradient from information in a local area only, so it is easily affected by noise; when shadows, occlusions, or blurred lane lines are present, the error is large, and for non-single lane lines (such as double lines or combined solid-dashed lines) it cannot locate the lane line accurately.
Therefore, the prior art has the defect that the segmented lane lines are imprecise.



Examples


Embodiment 1

[0025] Figure 1 is a flow chart of a lane line segmentation method provided by Embodiment 1 of the present invention. This embodiment is applicable to segmenting a lane line from a located lane line image, and the method can be executed by a lane line segmentation apparatus. The apparatus can be integrated into a terminal such as a computer or a mobile terminal. The method specifically includes the following:

[0026] S110. Collect a lane line image.

[0027] When a lane line is to be located, the lane line image is collected by controlling a camera.
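As an illustration of this step only, a minimal sketch of grabbing one camera frame with OpenCV as the lane line image; the device index 0 is an assumption, not a detail from the patent.

```python
import cv2

# Grab a single frame from the vehicle camera to use as the lane line image.
# Device index 0 is an assumption; replace it with the actual camera source.
camera = cv2.VideoCapture(0)
ok, lane_image = camera.read()   # lane_image is an H x W x 3 BGR array
camera.release()
if not ok:
    raise RuntimeError("failed to capture a lane line image")
```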

[0028] S120. Process the lane line image by using a convolutional neural network to obtain a rough segmentation result.

[0029] A convolutional neural network (CNN) is a feed-forward neural network whose artificial neurons respond to units within a local receptive field, and it performs very well on large-scale image processing. The spatial re...
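The patent text is truncated here and does not give the network architecture, so the following is only a minimal fully-convolutional sketch (in PyTorch) of how a CNN can map a lane line image to a coarse per-pixel lane probability map; the class name, layer sizes, and 0.5 threshold are illustrative assumptions, not details from the patent.

```python
import torch
import torch.nn as nn

class CoarseLaneSegNet(nn.Module):
    """Minimal fully-convolutional network producing a per-pixel
    lane-line probability map (the rough segmentation result)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # 1x1 convolution maps the features to one lane/non-lane score per pixel
        self.classifier = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x):
        return torch.sigmoid(self.classifier(self.features(x)))

# Usage: a 3-channel road image as a float tensor in [0, 1]
image = torch.rand(1, 3, 256, 512)           # placeholder lane line image
coarse_prob = CoarseLaneSegNet()(image)      # shape (1, 1, 256, 512)
coarse_mask = (coarse_prob > 0.5).squeeze()  # rough lane / non-lane mask
```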

Embodiment 2

[0045] Figure 5 is a flow chart of a lane line segmentation method provided in Embodiment 2 of the present invention. On the basis of Embodiment 1, this embodiment refines the step of constructing a graph cut model, according to the rough segmentation result and the lane line image, to subdivide the lane line image and determine the lane line area. The method specifically includes the following:

[0046] S510. Collect a lane line image.

[0047] S520. Process the lane line image by using a convolutional neural network to obtain a rough segmentation result.

[0048] S530. Determine an absolute foreground area, an absolute background area, and an uncertain area in the lane line image according to the rough segmentation result.

[0049] This step mainly initializes the graph cut model: by processing the rough segmentation result, it indicates which pixels in the lane line image belong to the absolute foreground area (the lane line) and which pixels belong to the...
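To make this initialization concrete, the sketch below uses OpenCV's GrabCut as a stand-in graph cut model: the coarse CNN probabilities seed the absolute foreground, absolute background, and uncertain regions, and the graph cut then relabels only the uncertain pixels. The function name and the 0.8 / 0.2 thresholds are assumptions for illustration, not values from the patent.

```python
import cv2
import numpy as np

def refine_with_graph_cut(lane_image_bgr, coarse_prob,
                          fg_thresh=0.8, bg_thresh=0.2):
    """Seed a graph-cut mask from the coarse segmentation and refine it.

    lane_image_bgr: H x W x 3 uint8 lane line image.
    coarse_prob:    H x W array of lane-line probabilities in [0, 1].
    """
    # Initialization: absolute foreground / background where the CNN is
    # confident, everything else is the uncertain ("probable") region.
    mask = np.full(coarse_prob.shape, cv2.GC_PR_BGD, dtype=np.uint8)
    mask[coarse_prob >= fg_thresh] = cv2.GC_FGD   # absolute foreground (lane line)
    mask[coarse_prob <= bg_thresh] = cv2.GC_BGD   # absolute background

    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(lane_image_bgr, mask, None, bgd_model, fgd_model,
                5, cv2.GC_INIT_WITH_MASK)

    # Pixels labelled (probably) foreground form the refined lane line area.
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))
```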

Embodiment 3

[0075] Figure 6 is a flow chart of a lane line segmentation method provided by Embodiment 3 of the present invention. This embodiment optimizes Embodiment 1 by adding, on the basis of Embodiment 1, a step of verifying the edge points of the lane line area. The method specifically includes the following:

[0076] S610. Collect a lane line image.

[0077] S620. Process the lane line image by using a convolutional neural network to obtain a rough segmentation result.

[0078] S630. According to the rough segmentation result and the lane line image, construct a graph cut model to subdivide the lane line image, and determine a lane line area.

[0079] S640. Check the edge points of the lane line area to determine the outline of the lane line.

[0080] The RANSAC (RANdom SAmple Consensus) algorithm is used to verify the edge points of the lane line area and eliminate the wild points, that is, the pixels judged to be in the lane line area instead of ...
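A minimal sketch of this verification, under the assumption that one lane line edge is approximately a straight line in the image: RANSAC repeatedly fits a line through two random edge points and keeps the largest consensus set, so edge points far from the dominant line are discarded as wild points. The iteration count and inlier tolerance are illustrative assumptions, not parameters from the patent.

```python
import numpy as np

def ransac_filter_edge_points(points, n_iters=200, inlier_tol=2.0, seed=None):
    """Keep the edge points consistent with the dominant straight line and
    discard the wild points (outliers).

    points: N x 2 array of (x, y) coordinates of one lane line edge.
    """
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, d = pts[i], pts[j] - pts[i]
        norm = np.hypot(d[0], d[1])
        if norm < 1e-6:
            continue  # the two sampled points coincide; skip this trial
        # Perpendicular distance of every point to the sampled line
        dist = np.abs(d[0] * (pts[:, 1] - p[1]) - d[1] * (pts[:, 0] - p[0])) / norm
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return pts[best_inliers]   # edge points kept after removing wild points
```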


Abstract

The invention discloses a lane line segmentation method and apparatus. The method comprises: collecting a lane line image; processing the lane line image by using a convolutional neural network to obtain a coarse segmentation result; and constructing, according to the coarse segmentation result and the lane line image, an image segmentation model to subdivide the lane line image, thereby determining a lane line area. Because the image segmentation model is constructed using information from the whole lane line image, its segmentation is precise, so the accuracy of lane line segmentation is improved.

Description

Technical field

[0001] Embodiments of the present invention relate to map technology, and in particular to a lane line segmentation method and apparatus.

Background technique

[0002] In natural scenes, accurate lane line segmentation can support the generation of key traffic elements in high-precision maps, and is a core technology for autonomous driving and assisted driving.

[0003] In the prior art, lane lines are generally segmented with image processing methods: the lane line in the collected image is first located, the image gradient near the lane line is computed, and the location with the strongest response is taken as the boundary of the lane line.

[0004] The prior art computes the image gradient from information in a local area only, so it is easily affected by noise; when shadows, occlusions, or blurred lane lines are present, the error is large, and for non-single lane lines (such as double lines or combined solid-dashed lines) it cannot locate the lane line accurately.


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/00
CPC: G06T2207/30256; G06T2207/20084
Inventor: 何贝, 晏涛, 晏阳, 贾相飞
Owner: BAIDU ONLINE NETWORK TECH (BEIJING) CO LTD