
Deep learning-based lane line detection and vehicle transverse positioning method

A technology for lane line detection and lateral positioning, applied in neural learning methods, character and pattern recognition, instruments, etc. It can solve the problems of poor robustness, heavy hardware demands, and difficulty in meeting the real-time requirements of automatic driving, and it achieves the effect of reducing the operation parameter volume and increasing speed.

Active Publication Date: 2021-08-27
NANJING UNIV OF AERONAUTICS & ASTRONAUTICS

AI Technical Summary

Problems solved by technology

The methods based on traditional image processing rely mainly on manually selected features; their performance degrades when lane lines are occluded or missing or when illumination changes, and their robustness is poor.
[0004] The methods based on deep learning rely on big data: the model learns the characteristics of lane lines autonomously, which places high demands on computer hardware. In addition, the extracted lane line features must go through clustering, curve fitting and other post-processing operations to obtain usable lane line parameters, which takes a long time and makes it difficult to meet the real-time requirements of automatic driving.
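
To make the cost of this post-processing concrete, the sketch below clusters the lane pixels produced by a segmentation-style detector and fits a second-order polynomial to each cluster. The clustering algorithm (DBSCAN) and the fit order are generic illustrative choices, not details taken from the patent.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def postprocess_lane_mask(lane_mask, eps=10, min_samples=30):
    """Typical post-processing after a segmentation-style detector:
    cluster the predicted lane pixels into individual lane lines and
    fit x = a*y^2 + b*y + c to each cluster. This per-image clustering
    and fitting is the step the invention seeks to avoid."""
    ys, xs = np.nonzero(lane_mask)                    # pixels predicted as lane
    if len(xs) == 0:
        return []
    points = np.stack([xs, ys], axis=1)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    lane_params = []
    for label in set(labels) - {-1}:                  # -1 marks noise points
        cluster = points[labels == label]
        coeffs = np.polyfit(cluster[:, 1], cluster[:, 0], deg=2)  # x as a function of y
        lane_params.append(coeffs)
    return lane_params
```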

Method used



Examples


Embodiment Construction

[0038] The technical solution of the present invention is described in further detail below in conjunction with the accompanying drawings:

[0039] This invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. In the drawings, components are exaggerated for clarity.

[0040] The data used in the specific experiments of the present invention come from the TuSimple dataset, which contains 6,408 labeled images. The dataset labels each lane line with the coordinates of a series of points: the image height is divided into equally spaced rows to generate the ordinate values of the lane lines, and the abscissa of each lane line is given at each of these ordinate values.
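
For reference, TuSimple labels are stored as one JSON object per image, with the shared ordinates in an `h_samples` list and the abscissas of each lane in a `lanes` list (-2 marks rows where the lane is absent). The minimal parsing sketch below follows the public TuSimple release and is not part of the patent text.

```python
import json

def load_tusimple_labels(label_path):
    """Parse TuSimple-style labels: one JSON object per line with
    'h_samples' (shared y coordinates), 'lanes' (x coordinates per lane,
    -2 where the lane is absent) and 'raw_file' (image path)."""
    samples = []
    with open(label_path) as f:
        for line in f:
            entry = json.loads(line)
            lanes = []
            for xs in entry["lanes"]:
                # keep only the (x, y) pairs where the lane is actually annotated
                points = [(x, y) for x, y in zip(xs, entry["h_samples"]) if x >= 0]
                if points:
                    lanes.append(points)
            samples.append({"image": entry["raw_file"], "lanes": lanes})
    return samples
```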

[0041] As shown in figure 1, the present invention discloses a method ...



Abstract

The invention discloses a lane line detection and vehicle transverse positioning method based on deep learning. The method comprises the steps of first training a deep learning network, then obtaining the parameters of each lane line in an image, and finally, according to the parameters of each lane line, obtaining the transverse positioning information of the vehicle (the index of the lane the vehicle occupies and the distances between the vehicle and the left and right lane lines of that lane). The method exploits the strength of deep learning for image feature extraction, omits the time-consuming post-processing module, and can predict the parameters of the lane lines in the image and the transverse positioning information of the vehicle more accurately and quickly.
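
To illustrate the transverse positioning step described in the abstract, here is a minimal sketch that assumes each detected lane line has been evaluated at the bottom image row to give an x coordinate, that the camera sits at the lateral centre of the vehicle, and that a pixel-to-metre scale factor is available. These assumptions and the function name are illustrative and not taken from the patent text.

```python
def lateral_position(lane_xs, image_width, metres_per_pixel):
    """Given the x coordinate of each detected lane line at the bottom image
    row, return the index of the lane the vehicle occupies and its distances
    to that lane's left and right lines.

    Assumes the camera sits at the lateral centre of the vehicle, so the
    vehicle position corresponds to the centre column of the image.
    """
    vehicle_x = image_width / 2.0
    xs = sorted(lane_xs)
    # find the pair of adjacent lane lines that brackets the vehicle
    for i in range(len(xs) - 1):
        if xs[i] <= vehicle_x <= xs[i + 1]:
            left_dist = (vehicle_x - xs[i]) * metres_per_pixel
            right_dist = (xs[i + 1] - vehicle_x) * metres_per_pixel
            return i + 1, left_dist, right_dist  # lanes numbered from 1, left to right
    return None, None, None  # vehicle outside the detected lane lines

# example: four lane lines detected in a 1280-pixel-wide image
print(lateral_position([200.0, 550.0, 900.0, 1250.0], 1280, 0.02))
```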

Description

Technical Field

[0001] The present invention relates to the technical field of automatic driving, in particular to a deep learning-based lane line detection and vehicle lateral positioning method.

Background Technique

[0002] As the input of the automatic driving decision planning module, the lateral positioning information has a very important impact on the safety of automatic driving, and the result of lane line detection directly affects the accuracy of the lateral positioning of automatic driving. At present, lane line detection is mainly divided into two schemes: methods based on traditional image processing and methods based on deep learning.

[0003] The main steps of the method based on traditional image processing are: image preprocessing, filtering out the interference items in the image, manually selecting features according to the difference between lane line pixel features and surrounding pixel features, extracting the feature information of the lane lines, and ...
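
For context on the traditional scheme outlined in [0003], a generic pipeline of this kind can be sketched with standard OpenCV operations (smoothing, hand-tuned Canny edge extraction, region-of-interest masking and Hough line fitting). This is an illustration of the conventional approach, not the method of the invention.

```python
import cv2
import numpy as np

def detect_lane_candidates(image_bgr):
    """Generic traditional pipeline: preprocess, extract hand-chosen edge
    features, and fit line segments; illustrative only."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress noise / interference
    edges = cv2.Canny(blurred, 50, 150)                  # manually tuned edge thresholds
    # restrict to a lower-image region of interest where lane lines usually appear
    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (w, h), (w // 2, h // 2)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    # fit straight-line segments to the remaining edge pixels
    return cv2.HoughLinesP(cv2.bitwise_and(edges, mask), 1, np.pi / 180,
                           threshold=50, minLineLength=40, maxLineGap=20)
```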

Claims


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06K9/00G06K9/62G06N3/04G06N3/08
CPCG06N3/04G06N3/08G06V20/588G06F18/25G06F18/214
Inventor 李立君张艳磊郑康诚苏洋
Owner NANJING UNIV OF AERONAUTICS & ASTRONAUTICS