Lane line detection method based on deep learning

A lane line detection method based on deep learning, applicable to instruments, character and pattern recognition, computer components, and related fields. It addresses the problems of time-consuming detection and low detection accuracy in traditional methods, achieving accurate and rapid detection.

Active Publication Date: 2019-10-22
BEIJING INFORMATION SCI & TECH UNIV

Problems solved by technology

[0003] The purpose of the present invention is to solve the problems of time-consuming detection and low detection accuracy in existing lane line detection methods.


Examples


Example Embodiment

[0018] Specific Embodiment 1: As shown in Figure 1, the lane line detection method based on deep learning described in this embodiment comprises the following steps:

[0019] Step 1. Randomly select M images from the TuSimple dataset, and mark the lane lines contained in the selected M images to obtain the marked images;

[0020] Step 2. Input the marked images obtained in step 1 into the fully convolutional neural network FCN-8s, train the network on these images, and stop training once the value of the loss function no longer decreases, obtaining the trained FCN-8s;

[0021] Depending on the factor by which the final output is upsampled back to the input image size, the FCN architecture comes in FCN-32s, FCN-16s, and FCN-8s variants; the present invention uses FCN-8s;

[0022] Step 3. Input the image in which lane lines are to be detected into the FCN-8s trained in step 2, and obtain the lane line detection result.
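For concreteness, steps 2 and 3 can be sketched in PyTorch as follows. This is a minimal illustrative sketch, not the patented implementation: `model` stands for any FCN-8s network mapping a 3-channel image to per-pixel logits over two classes (background, lane line), and the data loader, optimizer, and hyperparameters are assumptions.

```python
# Minimal sketch of steps 2 and 3, assuming a generic FCN-8s model that
# maps (N, 3, H, W) images to (N, 2, H, W) per-pixel class logits.
# Dataset, optimizer, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_fcn8s(model, loader: DataLoader, epochs=30, lr=1e-4, device="cuda"):
    """Step 2: train FCN-8s on the marked TuSimple images."""
    model = model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()  # embodiments 2 and 3 swap in WCE / Focal Loss
    for epoch in range(epochs):
        total = 0.0
        for images, masks in loader:   # masks: (N, H, W), long, values {0, 1}
            images, masks = images.to(device), masks.to(device)
            logits = model(images)     # (N, 2, H, W)
            loss = loss_fn(logits, masks)
            opt.zero_grad()
            loss.backward()
            opt.step()
            total += loss.item()
        print(f"epoch {epoch}: mean loss {total / len(loader):.4f}")
        # The patent stops training once the loss no longer decreases;
        # a plateau or early-stopping check would implement that rule here.
    return model

@torch.no_grad()
def detect_lanes(model, image: torch.Tensor, device="cuda") -> torch.Tensor:
    """Step 3: per-pixel lane/background prediction for one (3, H, W) image."""
    model = model.to(device).eval()
    logits = model(image.unsqueeze(0).to(device))  # (1, 2, H, W)
    return logits.argmax(dim=1).squeeze(0)         # (H, W) binary lane mask
```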

Example Embodiment

[0037] Specific Embodiment 2: This embodiment differs from Specific Embodiment 1 in that the loss function used in step 2 is a weighted cross-entropy loss (Weighted Cross Entropy Loss). For any pixel in the image to be tested, let the pixel's true category be y (y = 1 means the pixel is a lane line point, otherwise it is a non-lane-line point), and let p be the probability that the pixel is predicted to be of class y. Then the pixel's cross-entropy loss value WCE(p, y) is:

[0038] WCE(p, y) = -α_t log(p_t)

[0039] where α_t represents the weight coefficient and p_t denotes the predicted probability p of the pixel's true class y;

[0040] Add the cross-entropy loss values of all pixels in the image to be tested to obtain the total cross-entropy loss value;

[0041] Stop training once the total cross-entropy loss value no longer decreases.

[0042] This loss function differs from the standard cross-entropy loss function in the additional weight parameter α_t, which rebalances the contribution of the few lane-line pixels against the many background pixels.
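As a reference point, the per-pixel weighted cross-entropy of this embodiment could be written as below. This is a hedged sketch: the patent gives neither tensor shapes nor a value for α_t, so the `alpha_lane` weight and the (H, W) probability-map layout are assumptions.

```python
import torch

def weighted_cross_entropy(p: torch.Tensor, y: torch.Tensor, alpha_lane=0.9):
    """Weighted cross-entropy of embodiment 2, summed over all pixels.

    p          : predicted lane-line probability per pixel, shape (H, W)
    y          : true labels per pixel, 1 = lane line, 0 = non-lane-line
    alpha_lane : hypothetical weight for lane-line pixels; background
                 pixels receive 1 - alpha_lane, realizing alpha_t.
    """
    y = y.float()
    p_t = y * p + (1 - y) * (1 - p)                  # probability of the true class
    alpha_t = y * alpha_lane + (1 - y) * (1 - alpha_lane)
    wce = -alpha_t * torch.log(p_t.clamp_min(1e-8))  # per-pixel WCE(p, y)
    return wce.sum()                                 # total loss over the image
```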

Example Embodiment

[0047] Specific Embodiment 3: This embodiment differs from Specific Embodiment 1 in that the loss function used in step 2 is the Focal Loss (Lin T. Y., Goyal P., Girshick R., et al. Focal Loss for Dense Object Detection[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2017, PP(99): 2999-3007). For any pixel in the image to be tested, let the pixel's true category be y and let p be the probability that the pixel is predicted to be of class y. Then the pixel's loss value FL(p, y) is:

[0048] FL(p, y) = -α_t (1 - p_t)^γ log(p_t)

[0049] where α_t and γ both represent weight coefficients;

[0050] Add the loss values of all pixels in the image to be tested to obtain the total loss value;

[0051] Stop training once the total loss value no longer decreases.

[0052] The loss function of this embodiment multiplies the weighted cross-entropy loss by the additional factor (1 - p_t)^γ, which down-weights easy-to-classify sample points and thereby balances them against hard-to-classify ones.
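A corresponding sketch of the Focal Loss of this embodiment is below. The α_t and γ values shown are the defaults reported in the cited Lin et al. paper, not values specified by the patent, and the tensor layout is an assumption as before.

```python
import torch

def focal_loss(p: torch.Tensor, y: torch.Tensor, alpha_lane=0.25, gamma=2.0):
    """Focal Loss of embodiment 3 (Lin et al., 2017), summed over all pixels.

    p     : predicted lane-line probability per pixel, shape (H, W)
    y     : true labels per pixel, 1 = lane line, 0 = non-lane-line
    alpha_lane, gamma : weight coefficients; the defaults follow the cited
                        paper and are assumptions here, not patent values.
    """
    y = y.float()
    p_t = y * p + (1 - y) * (1 - p)        # probability of the true class
    alpha_t = y * alpha_lane + (1 - y) * (1 - alpha_lane)
    modulator = (1 - p_t) ** gamma         # down-weights easy pixels
    fl = -alpha_t * modulator * torch.log(p_t.clamp_min(1e-8))
    return fl.sum()                        # total loss over the image
```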


Abstract

The invention discloses a lane line detection method based on deep learning, belonging to the technical field of lane line detection. It solves the problems that traditional lane line detection methods are time-consuming and have low detection precision. The method treats lane line detection as a pixel-level semantic segmentation problem: lane lines and background are separated by the fully convolutional neural network FCN-8s. Video detection reaches an average of 50 frames per second, and detection precision reaches 92.3%, achieving accurate and rapid detection. The method can be applied in the technical field of lane line detection.

Description

Technical field

[0001] The invention belongs to the technical field of lane line detection, and in particular relates to a lane line detection method based on deep learning.

Background technique

[0002] Autonomous driving has great potential for alleviating traffic congestion, reducing traffic accidents, and improving road and vehicle utilization, and has become a competitive focus for many companies. It concentrates modern sensing technology, information and communication technology, automatic control technology, computer technology, artificial intelligence, and other technologies; it represents the strategic commanding heights of future automotive technology, is key to the transformation and upgrading of the automotive industry, and is a development direction currently recognized worldwide. Among its components, lane line detection is a core technology. Traditional lane line detection methods mostly rely on manual feature extraction or de...


Application Information

IPC(8): G06K9/00, G06K9/38, G06K9/62
CPC: G06V20/588, G06V10/28, G06F18/23
Inventor: 王超, 付子昂
Owner: BEIJING INFORMATION SCI & TECH UNIV