Lane line detection method and system

A lane line detection technology, applied in the field of lane line detection methods and systems, which can solve problems such as severe performance degradation at the far end of curves, inefficient encoders, and the assumption that the number of lane lines is fixed, and achieves the effects of richer semantic features, improved positioning accuracy, and enriched global semantic information.

Pending Publication Date: 2021-06-01
中汽创智科技有限公司

AI Technical Summary

Problems solved by technology

[0003] At present, the most advanced lane line detection methods in the industry are based on CNNs. For example, the SCNN and SAD networks treat lane line detection as a semantic segmentation task and rely on a heavy encoding and decoding structure. However, such methods usually take a small image as input, which makes it difficult to accurately predict the far end of a curved lane line. In addition, they are usually limited to detecting a predefined number of lane lines, whereas in real road scenes the number of lane lines in an image is not fixed. To address this, PointLaneNet follows a candidate-region-based strategy: by generating multiple candidate lines in the image, it avoids both the inefficient encoder and the limitation to a predefined number of lanes.
[0004] The existing scheme uses ResNet122 as the backbone network to extract semantic features: the input image is passed through the backbone to obtain the corresponding feature map, each grid cell on the feature map is regarded as an anchor, and each anchor predicts the lane line passing through its grid cell. An NMS algorithm is then used to retain the lane lines with high confidence and filter out redundant candidate lane lines. Through end-to-end training, the complete lane lines can be output directly, and the output is no longer restricted to a fixed number of lane lines. However, using an anchor to predict the entire lane line passing through its grid cell on the obtained feature map causes a severe performance drop when predicting the far end of a curve. Moreover, because of the limited scale of the convolution kernels, the feature map obtained by this scheme can only capture local information and cannot capture long-range and short-range context at the same time.
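To make the candidate-filtering step concrete, the following is a minimal sketch of confidence-based non-maximum suppression over candidate lane lines. The lane representation (x-coordinates sampled at fixed image rows), the mean-offset distance metric, the threshold value, and all names are illustrative assumptions, not the exact procedure used by PointLaneNet.

```python
import numpy as np

def lane_distance(lane_a, lane_b):
    # Mean horizontal offset between two lanes sampled at the same image rows
    # (assumed metric; the exact distance measure is not specified here).
    return float(np.mean(np.abs(lane_a - lane_b)))

def lane_nms(lanes, scores, dist_thresh=20.0):
    """Keep high-confidence candidate lanes and drop near-duplicates.

    lanes  : (N, K) array; each row is a lane given as K x-coordinates
             sampled at K fixed image rows (illustrative representation).
    scores : (N,) confidence score of each candidate lane.
    """
    order = np.argsort(scores)[::-1]  # highest confidence first
    keep = []
    for idx in order:
        # Keep a candidate only if it is far enough from every kept lane.
        if all(lane_distance(lanes[idx], lanes[k]) > dist_thresh for k in keep):
            keep.append(idx)
    return keep

# Example: three candidates, the first two being near-duplicates.
lanes = np.array([[100, 110, 120, 130],
                  [102, 111, 121, 132],
                  [300, 310, 320, 330]], dtype=np.float32)
scores = np.array([0.9, 0.6, 0.8])
print(lane_nms(lanes, scores))  # -> [0, 2]
```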

Method used

the structure of the environmentally friendly knitted fabric provided by the present invention; figure 2 Flow chart of the yarn wrapping machine for environmentally friendly knitted fabrics and storage devices; image 3 Is the parameter map of the yarn covering machine
View more

Image

Smart Image Click on the blue labels to locate them in the text.
Viewing Examples
Smart Image
  • Lane line detection method and system
  • Lane line detection method and system
  • Lane line detection method and system

Examples


Embodiment 1

[0148] According to Embodiment 1, a lane line detection method is provided, the method comprising the following steps (a sketch in code follows the list):

[0149] Build a convolutional neural network model;

[0150] Establish the training and inference phases based on the network model;

[0151] Preprocess the training samples in the training phase;

[0152] Obtain the predicted lane lines through the inference phase.
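A minimal, runnable sketch of how these four steps might fit together, using a toy model and random data as stand-ins; every name, shape, and hyperparameter here is an illustrative assumption rather than the patent's actual network or training setup.

```python
import torch
import torch.nn as nn

N_POINTS = 72  # number of sampled row positions per lane (assumed)

# Step 1: build a convolutional neural network model (toy stand-in).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, N_POINTS),  # regress x-coordinates of a single lane
)

# Steps 2-3: training phase; preprocessing/augmentation would be applied to
# the sample images before they reach the model.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
criterion = nn.SmoothL1Loss()
for step in range(5):  # iterate until convergence in practice
    images = torch.rand(4, 3, 288, 512)       # dummy training batch
    targets = torch.rand(4, N_POINTS) * 512   # dummy lane x-coordinates
    loss = criterion(model(images), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Step 4: inference phase; the raw output would then be post-processed
# (e.g. confidence filtering and NMS) to obtain the predicted lane lines.
model.eval()
with torch.no_grad():
    pred = model(torch.rand(1, 3, 288, 512))
print(pred.shape)  # torch.Size([1, 72])
```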

Embodiment 2

[0154] On the basis of Embodiment 1, the neural network model is constructed as follows (an implementation sketch is given after these steps):

[0155] The neural network model performs feature extraction to obtain a convolutional feature map;

[0156] The convolutional feature map is fed into three branches, each with a 1x1 convolutional layer;

[0157] The 1x1 convolutional layer in each branch compresses the number of channels of the feature map;

[0158] The compressed feature map of the first branch is transposed and matrix-multiplied with that of the second branch to obtain the attention feature map;

[0159] Pass the attention feature map through softmax to obtain the normalized attention feature map;

[0160] Matrix-multiply the compressed feature map of the third branch with the normalized attention feature map to obtain a weighted attention feature map;

[0161] Use a 1x1 convolution to increase the channel dimension of the above feature map and output the self-attention feature map.
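The steps above describe a standard self-attention block; the following is a sketch of one possible implementation in PyTorch. The three 1x1 branches, the softmax normalization, and the channel-restoring 1x1 convolution follow the text (with the C/8 compression ratio taken from Embodiment 3); module and variable names are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """Self-attention block over an N x C x H x W feature map."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        c = channels // reduction
        # Three branches, each a 1x1 convolution compressing the channels to C/8.
        self.to_q = nn.Conv2d(channels, c, kernel_size=1)
        self.to_r = nn.Conv2d(channels, c, kernel_size=1)
        self.to_p = nn.Conv2d(channels, c, kernel_size=1)
        # 1x1 convolution restoring the channel dimension back to C.
        self.out = nn.Conv2d(c, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, _, h, w = x.shape
        # Compress and flatten the spatial dimensions: N x C/8 x (H*W).
        q = self.to_q(x).flatten(2)
        r = self.to_r(x).flatten(2)
        p = self.to_p(x).flatten(2)
        # Transpose one branch and multiply with another: M = Q^T R, size N x HW x HW.
        m = torch.bmm(q.transpose(1, 2), r)
        # Normalize with softmax to obtain the attention feature map M'.
        m = F.softmax(m, dim=-1)
        # Weight the third branch: O = P M', size N x C/8 x HW.
        o = torch.bmm(p, m)
        # Restore channels with a 1x1 convolution -> self-attention feature map.
        return self.out(o.view(n, -1, h, w))
```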

Embodiment 3

[0163] On the basis of Embodiment 2, the attention structure of the neural network model is specified as follows (a shape check is given after these steps):

[0164] It is assumed that the size of the feature map obtained after passing through the backbone network is N*C*H*W,

[0165] Afterwards, each of the three branches uses a 1x1 convolution to compress the number of channels of the feature map.

[0166] The size of the compressed feature map is N*(C/8)*(H*W);

[0167] After that, the feature map Q of the second branch is passed through the transpose function to obtain Q', whose size is N*(H*W)*(C/8);

[0168] Then perform a matrix multiplication of Q' with R, the compressed feature map of another branch, to obtain M, and use Softmax to normalize M to obtain M', whose size is N*(H*W)*(H*W);

[0169] M' characterizes the extracted global semantic information; then perform a matrix multiplication of P, the compressed feature map of the remaining branch, with M' to obtain a feature map O whose size is N*(C/8)*(H*W);

[0170] Finally, a 1x1 convolution is used to increase the channel dimension of O, and the self-attention feature map is output.
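The sizes quoted above can be verified with a standalone shape check using raw tensor operations (N=2, C=64, H=18, W=32 are illustrative values, not sizes taken from the patent):

```python
import torch
import torch.nn.functional as F

n, c, h, w = 2, 64, 18, 32
# Compressed branch outputs, each of size N x C/8 x (H*W).
q = torch.rand(n, c // 8, h * w)
r = torch.rand(n, c // 8, h * w)
p = torch.rand(n, c // 8, h * w)

q_t = q.transpose(1, 2)          # Q': N x (H*W) x C/8
m = torch.bmm(q_t, r)            # M:  N x (H*W) x (H*W)
m = F.softmax(m, dim=-1)         # M': N x (H*W) x (H*W)
o = torch.bmm(p, m)              # O:  N x C/8 x (H*W)
print(m.shape, o.shape)          # torch.Size([2, 576, 576]) torch.Size([2, 8, 576])
```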


Abstract

The invention discloses a lane line detection method and system. The method specifically comprises the steps of: modeling the lane detection task, building a convolutional neural network model, and performing feature extraction to obtain a convolutional feature map; in the training stage, collecting lane line training sample images, increasing the diversity of the training samples through preprocessing, and obtaining converged network model parameters through iterative training; and in the inference stage, post-processing the inference results to obtain the lane lines predicted by the model. According to the invention, a self-attention mechanism is introduced into the network structure, so that by fusing local and global information the semantic features of the extracted feature map become richer, and the network model trained with this structure further improves the positioning accuracy of the far end of the lane line.

Description

Technical field

[0001] The invention relates to lane line detection technology, and in particular to a lane line detection method and system.

Background technique

[0002] Lane lines, as an important part of road markings, can effectively guide intelligent vehicles to drive within the constrained road structure area. Real-time detection of road lane lines is an important link in intelligent vehicle assisted driving systems: it helps to assist path planning, supports functions such as lane departure warning, and can provide reference objects for positioning and navigation.

[0003] At present, the most advanced lane line detection methods in the industry are based on CNNs. For example, the SCNN and SAD networks treat lane line detection as a semantic segmentation task and rely on a heavy encoding and decoding structure. However, such methods usually take a small image as input, which makes it difficult to accurately predict the far end of a curved lane line; in addition...


Application Information

IPC(8): G06K9/00; G06N3/04; G06N3/08
CPC: G06N3/04; G06N3/08; G06N3/084; G06V20/588
Inventor: 李丰军, 周剑光
Owner: 中汽创智科技有限公司