
A Lane Line Extraction Method for Event Camera Based on Deep Learning

A deep-learning-based extraction method in the field of image processing. It solves the problems of poor imaging quality and difficult lane line extraction, achieving low latency, a high dynamic range, and fast, accurate lane line curve fitting.

Active Publication Date: 2022-03-15
WUHAN UNIV

AI Technical Summary

Problems solved by technology

[0003] In view of the defects in the prior art, the purpose of the present invention is to provide a deep-learning-based method for extracting lane lines from event cameras. It addresses the poor imaging quality of ordinary optical cameras in some harsh environments (such as tunnel entrances and exits), which makes lane line extraction difficult; the proposed network based on a structure prior can extract lane lines from DVS images with high precision.


Embodiment Construction

[0027] In order to make the object, technical solution and effect of the present invention more clear and definite, the present invention will be further described in detail below with reference to the accompanying drawings.

[0028] The present invention provides a method for extracting lane lines of an event camera based on deep learning, comprising the following steps:

[0029] Step 1: Construct an image frame from the event stream generated by the DVS. A frame is typically built by accumulating the events within a period of time and expressing them as a binary image, as shown in Figure 1.
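The accumulation described in Step 1 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the event tuple layout `(x, y, t, polarity)` and the per-pixel "fired at least once" rule are assumptions, since the text only specifies accumulating events over a time window into a binary image.

```python
import numpy as np

def events_to_frame(events, height, width, t_start, t_end):
    """Accumulate DVS events with timestamps in [t_start, t_end)
    into a binary image frame.

    `events` is assumed to be an (N, 4) array of (x, y, t, polarity)
    rows; the exact event format is not specified in the patent text.
    """
    frame = np.zeros((height, width), dtype=np.uint8)
    for x, y, t, _polarity in events:
        if t_start <= t < t_end:
            # Mark every pixel that fired at least once in the window.
            frame[int(y), int(x)] = 1
    return frame

# Toy example: three events, two of which fall inside the window.
events = np.array([[2, 3, 0.01, 1],
                   [5, 1, 0.02, -1],
                   [0, 0, 0.20, 1]])
frame = events_to_frame(events, height=4, width=8, t_start=0.0, t_end=0.1)
```

A real pipeline would vectorize this with boolean masks rather than a Python loop, but the per-event form makes the windowing rule explicit.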

[0030] Step 2: Feed the generated DVS images and their corresponding semantic labels into the structure-prior network for supervised training. The structure-prior network consists of a base network and an omnidirectional slice convolution module; the base network extracts semantic information through convolution and pooling. The ...
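The patent does not spell out the omnidirectional slice convolution module here, so the sketch below is an assumption: it models the module as SCNN-style directional message passing, where each row (or column) slice of a feature map receives a weighted, rectified copy of its already-updated neighbour, and four directional passes spread information across the whole map. The function names and the shared scalar `weight` are illustrative only.

```python
import numpy as np

def slice_conv(feat, weight, direction="down"):
    """One directional pass of a slice convolution (SCNN-style sketch).

    Each slice is updated sequentially from its neighbour, so activations
    propagate across the entire image in one direction per pass.
    """
    out = feat.copy()
    h, w = out.shape
    if direction == "down":              # top row -> bottom row
        for i in range(1, h):
            out[i] += weight * np.maximum(out[i - 1], 0.0)   # ReLU message
    elif direction == "up":              # bottom row -> top row
        for i in range(h - 2, -1, -1):
            out[i] += weight * np.maximum(out[i + 1], 0.0)
    elif direction == "right":           # left column -> right column
        for j in range(1, w):
            out[:, j] += weight * np.maximum(out[:, j - 1], 0.0)
    elif direction == "left":            # right column -> left column
        for j in range(w - 2, -1, -1):
            out[:, j] += weight * np.maximum(out[:, j + 1], 0.0)
    return out

def omnidirectional_slice_conv(feat, weight=0.5):
    """Apply all four directional passes in sequence."""
    for d in ("down", "up", "right", "left"):
        feat = slice_conv(feat, weight, d)
    return feat

# A single activation in one corner reaches the whole map after the
# four passes, which is why slice convolution suits elongated objects
# such as lane lines.
feat = np.zeros((4, 4))
feat[0, 0] = 1.0
out = omnidirectional_slice_conv(feat)
```

The sequential, whole-slice propagation is what lets the module capture long-range structure that ordinary 3x3 convolutions reach only after many layers.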


Abstract

The invention provides a deep-learning-based method for extracting lane lines from an event camera. The invention proposes a network based on structural priors which, by means of an omnidirectional slice convolution module, can well capture the spatial relationships between pixels, especially for objects that appear as elongated shapes. To further improve the accuracy of lane line extraction, the invention introduces a post-processing method based on Monte Carlo sampling and least-squares polynomial fitting to fit the lane lines and finally complete the lane line extraction task.
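The post-processing step in the abstract can be sketched as below: randomly (Monte Carlo) sample pixels from the predicted lane mask, then fit x = f(y) by least-squares polynomial fitting. The polynomial degree, sample count, and function names are illustrative assumptions, not values from the patent.

```python
import numpy as np

def fit_lane(mask, degree=2, n_samples=50, rng=None):
    """Fit one lane line curve from a binary segmentation mask.

    Monte Carlo step: draw a random subset of lane pixels.
    Fitting step: np.polyfit solves the least-squares problem for the
    polynomial coefficients of x as a function of y (lanes are closer
    to functions of the row index than of the column index).
    """
    rng = np.random.default_rng(rng)
    ys, xs = np.nonzero(mask)
    idx = rng.choice(len(ys), size=min(n_samples, len(ys)), replace=False)
    coeffs = np.polyfit(ys[idx], xs[idx], deg=degree)
    return np.poly1d(coeffs)

# Synthetic mask traced from the curve x = 0.01 * y**2 + 2.
mask = np.zeros((100, 120), dtype=np.uint8)
ys = np.arange(100)
xs = (0.01 * ys**2 + 2).astype(int)
mask[ys, xs] = 1
lane = fit_lane(mask, rng=0)
```

Subsampling before fitting keeps the least-squares solve cheap and damps the influence of stray false-positive pixels in the mask.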

Description

Technical field

[0001] The invention belongs to the technical field of image processing, and in particular relates to a deep-learning-based method for extracting lane lines from an event camera.

Background technique

[0002] Lane line extraction is a basic and important task in the field of automatic driving. In recent years, advanced lane line extraction methods have used deep learning models. These methods usually use RGB images, that is, ordinary optical camera images. However, due to their imaging mechanism, ordinary optical images inherently suffer from motion blur and a small dynamic range. To solve these problems, the invention introduces a dynamic vision sensor (DVS, an event camera), which offers low latency and a high dynamic range, and constructs a data set for the lane line extraction task. To extract lane lines well on DVS images, the present invention proposes a network based on a structure prior. The network can well capture the spatial rel...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06V20/56, G06V10/82, G06N3/04
CPC: G06V20/588, G06N3/045
Inventor: 杨文, 罗豪, 程文胜, 余磊, 徐芳
Owner: WUHAN UNIV