
Height information-based unmanned vehicle lane scene segmentation method

A scene-segmentation technology based on height information, applied in instruments, character and pattern recognition, and computer components. It addresses problems such as excessive segmentation noise and unclear boundaries between road and non-road areas, achieving the effect of reduced noise.

Active Publication Date: 2017-07-21
UNIV OF ELECTRONICS SCI & TECH OF CHINA

AI Technical Summary

Problems solved by technology

However, segmenting the scene in this way introduces considerable noise, and this noise leaves the boundary between the road area and the non-road area unclear.



Examples


Embodiment

[0025] Figure 1 is a flowchart of the height information-based unmanned vehicle lane scene segmentation method of the present invention.

[0026] In this embodiment, as shown in Figure 1, the height information-based unmanned vehicle lane scene segmentation method of the present invention comprises the following steps:

[0027] S1. Encode the lane image with a neural network

[0028] In this embodiment, a vehicle-mounted camera collects lane images, which are then fed into the neural network. The convolution and pooling operations of the network's encoding stage perform feature extraction on the input lane images, producing a sparse feature map.
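The encoding stage described above can be illustrated with a minimal NumPy sketch of one convolution-plus-pooling step. The 3×3 kernel values, the toy 6×6 "image", and the 2×2 pooling window are illustrative assumptions, not values from the patent:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image,
    multiply element-wise at each position, and sum."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling that downsamples the feature map."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# One encoder stage: convolution extracts features, pooling downsamples them.
image = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "lane image"
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])                  # vertical-edge detector
features = conv2d(image, kernel)                    # 4x4 feature map
pooled = max_pool(features)                         # 2x2 downsampled map
print(pooled.shape)  # (2, 2)
```

In a real encoder these stages are stacked (as in SegNet-style networks), each convolution learning its kernels during training rather than using a fixed edge detector as here.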

[0029] In this embodiment, the specific operation of each convolutional layer is: 1) use the template (kernel) matrix to perform shifted element-wise multiplication over the image pixel matrix, i.e., the corresponding positions of the m...



Abstract

The invention discloses a height information-based unmanned vehicle lane scene segmentation method. A neural network encodes and then decodes a lane image to obtain a densified feature map; the pixels of this feature map are classified by a softmax classifier to yield a pixel-level lane scene segmentation map; the division into road and non-road regions is then corrected using height information-based error processing. This reduces the noise generated during segmentation and resolves problems such as blurred boundaries between the road and non-road regions caused by that noise.
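The pipeline the abstract describes — per-pixel softmax classification followed by a height-based correction — can be sketched as below. The correction rule shown (flip "road" pixels whose height exceeds a threshold to non-road) and the 0.2 m threshold are assumptions for illustration; the patent's exact error-processing rule is not reproduced in this page:

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def segment_with_height(logits, height_map, road_class=0, max_road_height=0.2):
    """Per-pixel classification, then a height-based correction:
    pixels labelled 'road' but standing above max_road_height
    (e.g. obstacles misclassified as road) are flipped to non-road."""
    labels = softmax(logits).argmax(axis=-1)
    too_high = (labels == road_class) & (height_map > max_road_height)
    labels[too_high] = 1  # 1 = non-road
    return labels

# Toy 2x2 image, 2 classes (0 = road, 1 = non-road); heights in metres.
logits = np.array([[[3.0, 1.0], [3.0, 1.0]],
                   [[1.0, 3.0], [3.0, 1.0]]])
heights = np.array([[0.05, 0.80],
                    [0.10, 0.05]])
print(segment_with_height(logits, heights))
# [[0 1]
#  [1 0]]
```

Note how the top-right pixel is classified as road by the network but reassigned to non-road because its height (0.80 m) is inconsistent with a drivable surface; this is the kind of noise suppression the abstract claims.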

Description

Technical field

[0001] The invention belongs to the technical field of scene segmentation and, more specifically, relates to a height information-based unmanned vehicle lane scene segmentation method.

Background technique

[0002] The rapid development of science and technology has driven progress in technologies such as unmanned vehicles. Machine vision, which plays a key role in the intelligent systems of unmanned vehicles, occupies an increasingly important position. The analysis and understanding of road scenes, as an important part of in-vehicle intelligent systems, has naturally become a research hotspot. Scene understanding is object recognition at a deeper level: on the basis of image analysis, the image is semantically segmented, finally yielding a classification result for each pixel at its corresponding position. Going forward, computer vision will strive for deeper image understanding at the semantic level, not only satisf...


Application Information

IPC(8): G06K9/00; G06K9/62
CPC: G06V20/588; G06F18/24
Inventor: 程洪, 郭智豪, 杨路, 林子彧
Owner UNIV OF ELECTRONICS SCI & TECH OF CHINA