
Intelligent vehicle lane line semantic segmentation method and system

A semantic segmentation technology for intelligent vehicles, applied in the field of computer vision, that addresses the problems of poor robustness, low lane line detection accuracy, and misclassification of image pixels in semantic segmentation, achieving the effect of improved accuracy.

Pending Publication Date: 2020-10-30
SHANGHAI JIAO TONG UNIV

AI Technical Summary

Problems solved by technology

[0005] Although neural networks with an encoder-decoder architecture can solve the image pixel classification problem to a certain extent, pixel misclassification remains a major obstacle to applying semantic segmentation technology in the field of intelligent vehicles.
[0006] Patent document CN109766878A (application number: 201910287099.0) discloses a lane line detection method and device in the field of automatic driving technology, intended to solve the problems of low accuracy and poor robustness in lane line detection. CN109766878A discloses using the maximum height value of each grid cell in a bird's-eye-view feature map, the average reflection intensity, and the statistical density of the number of point-cloud points as inputs to darknet for feature extraction; determining the feature information of lane line points by FPN fusion of high-resolution low-level features with the high-level semantic information of high-level features; determining, according to this feature information, the points in the point cloud map that correspond to the lane line points in the bird's-eye-view feature map; taking the points among these whose reflection intensity is greater than the average reflection intensity threshold as lane line feature points; and fitting a geometric model to these feature points in the point cloud map to determine the lane line.
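
As an illustration of the intensity-thresholding and model-fitting step that CN109766878A describes, the following Python sketch filters candidate point-cloud points by the average-reflection-intensity threshold and fits a simple geometric model. The array names and the choice of a second-order polynomial are assumptions for illustration only, not details taken from that patent.

```python
import numpy as np

def fit_lane_from_candidates(points_xyz, intensities):
    """Keep candidate points whose reflection intensity exceeds the average
    intensity threshold, then fit a simple geometric model to them.

    points_xyz  : (N, 3) array of candidate lane-line points in the point cloud map
    intensities : (N,) array of per-point reflection intensities
    """
    # Threshold on the average reflection intensity, as described for CN109766878A.
    threshold = intensities.mean()
    feature_points = points_xyz[intensities > threshold]

    # Fit a geometric model; a 2nd-order polynomial x = f(y) is assumed here
    # purely for illustration -- the patent does not specify the model form.
    coeffs = np.polyfit(feature_points[:, 1], feature_points[:, 0], deg=2)
    return feature_points, coeffs
```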



Examples


Embodiment 1

[0044] A method for semantic segmentation of intelligent vehicle lane lines provided according to the present invention, comprising:

[0045] Step M1: Select a frame from the video collected by the intelligent vehicle during driving as the input image; apply a corresponding perspective transformation to the input image to obtain a road surface image at a suitable viewing angle;
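
A minimal sketch of Step M1's perspective transformation using OpenCV; the corner correspondences `src_pts`/`dst_pts` and the output size are assumed calibration values, since the patent does not fix them.

```python
import cv2
import numpy as np

def warp_to_road_view(frame, src_pts, dst_pts, out_size=(512, 256)):
    """Warp one video frame so that it focuses on the road surface.

    src_pts / dst_pts are four corresponding corner points; their concrete
    values depend on the camera mounting and are assumptions, not taken from
    the patent. out_size is likewise an assumed network input size (W, H).
    """
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(frame, H, out_size)
```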

[0046] Step M2: Input the perspective-transformed road surface image into the feature extraction network for feature extraction, and obtain the first-layer feature map and the last-layer feature map of the feature extraction network;

[0047] Step M3: Perform a quadruple (4x) upsampling operation on the last-layer feature map, merge it with the first-layer feature map, and perform a convolution operation to obtain the convolved image information;

[0048] Step M4: Perform a quadruple upsampling operation on the feat...
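
The following PyTorch sketch illustrates the fusion described in Steps M2-M4 and completed in the abstract: the last-layer feature map is upsampled by a factor of four, merged with the first-layer feature map and convolved, then combined again with both the first-layer map and the convolved information before a second convolution and a final upsampling. Channel counts, kernel sizes, the number of classes, and the bilinear upsampling mode are assumptions, not values specified by the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LaneFusionDecoder(nn.Module):
    """Sketch of the fusion in Steps M2-M4: fuse the encoder's first-layer and
    last-layer feature maps. Channel counts and the number of output classes
    are assumptions, not values given in the patent."""

    def __init__(self, c_first=64, c_last=256, n_classes=2):
        super().__init__()
        self.conv1 = nn.Conv2d(c_first + c_last, 64, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(c_first + c_last + 64, n_classes, kernel_size=3, padding=1)

    def forward(self, feat_first, feat_last):
        # M3: quadruple-upsample the last-layer map (implemented by matching the
        # first-layer map's spatial size), merge with the first-layer map, convolve.
        up = F.interpolate(feat_last, size=feat_first.shape[2:], mode="bilinear",
                           align_corners=False)
        fused = self.conv1(torch.cat([up, feat_first], dim=1))

        # M4 / abstract: upsample the last-layer map again, combine it with the
        # first-layer map and the convolved information, convolve a second time.
        fused2 = self.conv2(torch.cat([up, feat_first, fused], dim=1))

        # Final quadruple upsampling yields the lane-line segmentation map at the
        # (assumed) input resolution.
        return F.interpolate(fused2, scale_factor=4, mode="bilinear", align_corners=False)
```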

Embodiment 2

[0069] Embodiment 2 is a variation of Embodiment 1.

[0070] The invention provides a method for semantic segmentation of intelligent vehicle lane lines, which fully exploits the importance and redundancy of the feature information from the first and last layers of the feature extraction network to improve the accuracy of lane line semantic segmentation. This embodiment takes the semantic segmentation of a single image as an example; as shown in Figures 1-4, the semantic segmentation method comprises the following steps:

[0071] Step 1: Take a frame from the video collected during the driving of the intelligent vehicle as the original image, and perform a perspective transformation on it to obtain an input image focused on the road surface information. The image is resized during the perspective transformation of the original image.

[0072] Step 2: Input the perspective-transformed image obtained in Step 1 into the feature extraction...



Abstract

The invention provides an intelligent vehicle lane line semantic segmentation method and system. The method comprises the steps of: selecting a frame from a video collected by an intelligent vehicle during driving as the input picture; inputting the input picture into a feature extraction network for feature extraction, and obtaining the first-layer feature map and the last-layer feature map of the feature extraction network; performing a preset-factor upsampling operation on the last-layer feature map, combining it with the first-layer feature map, and performing a convolution operation to obtain the convolved image information; performing a preset-factor upsampling operation on the feature map output by the last layer, combining it with the first-layer feature map and the convolved image information, and performing a convolution operation to obtain the image information after the second convolution; and performing a preset-factor upsampling operation on the picture data after the second convolution, and outputting the semantic segmentation image of the lane line. Existing hardware resources are fully utilized, and hardware expenditure is not increased while the prediction accuracy of the image pixels is guaranteed.
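
A hedged end-to-end usage sketch of the pipeline in the abstract, reusing the hypothetical `warp_to_road_view` and `LaneFusionDecoder` helpers from the sketches above; the ResNet-18 backbone split, the video file name, and the calibration points are assumptions rather than the patent's concrete choices.

```python
import cv2
import torch
import torchvision

# Split a ResNet-18 so that its stem provides the "first-layer" feature map
# (64 channels, 1/4 resolution) and its middle blocks provide the "last-layer"
# map (256 channels, 1/16 resolution, i.e. four times smaller). This backbone
# choice is an assumption, not the patent's concrete network.
backbone = torchvision.models.resnet18(weights=None)
first_layer = torch.nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool)
last_layer = torch.nn.Sequential(backbone.layer1, backbone.layer2, backbone.layer3)
decoder = LaneFusionDecoder(c_first=64, c_last=256, n_classes=2)
for m in (first_layer, last_layer, decoder):
    m.eval()

# Assumed camera calibration for the perspective warp (hypothetical values).
src_pts = [(200, 720), (1080, 720), (750, 480), (530, 480)]
dst_pts = [(0, 256), (512, 256), (512, 0), (0, 0)]

cap = cv2.VideoCapture("drive.mp4")                              # hypothetical video file
ok, frame = cap.read()
if ok:
    road = warp_to_road_view(frame, src_pts, dst_pts)            # perspective transform
    x = torch.from_numpy(road).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        f_first = first_layer(x)                                 # first-layer feature map
        f_last = last_layer(f_first)                             # last-layer feature map
        lane_mask = decoder(f_first, f_last).argmax(dim=1)       # per-pixel lane / background
```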

Description

Technical Field

[0001] The present invention relates to the field of computer vision, in particular to a method and system for semantic segmentation of intelligent vehicle lane lines, and more specifically to a method for realizing lane line semantic segmentation based on redundant feature extraction information.

Background Technique

[0002] Lane line semantic segmentation is an application of computer vision technology in the field of intelligent vehicles; it assists intelligent vehicles in identifying lane lines and in building and updating high-precision maps during driving.

[0003] The core of lane line semantic segmentation technology is to classify each pixel of a frame of picture collected by the intelligent vehicle during driving.

[0004] The success of convolution operations in feature extraction has promoted the application of convolutional neural networks in the field of semantic segmentation. The neural network of t...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/34; G06N3/04
CPC: G06V20/588; G06V10/267; G06N3/045
Inventors: 刘冶 (Liu Ye), 张希 (Zhang Xi)
Owner: SHANGHAI JIAO TONG UNIV