
End-to-end unsupervised scene passable area cognition and understanding method

An unsupervised drivable-area technology, applied in the field of traffic control, which can solve problems such as unsatisfactory segmentation results, mutual interference between sensors, and adverse effects on intelligent-vehicle decision making.

Active Publication Date: 2018-11-23
CHANGAN UNIV
5 Cites · 10 Cited by

AI Technical Summary

Problems solved by technology

At present, intelligent vehicles mainly combine radar and cameras to recognize and understand the drivable area. However, radar (lidar, millimeter-wave radar, ultrasonic radar) is usually costly, power-hungry, and prone to mutual interference.
[0003] Vision-based drivable-area cognition and understanding methods mainly rely on road-surface color, road models, road-surface texture features, etc. to obtain the basic structural features of the road surface, and from these features further derive potential information such as the vanishing point, road edge lines, and the basic direction of the road (straight ahead, left turn, right turn, sharp left turn, sharp right turn). Traditional segmentation is then applied to these features for the final extraction of the drivable area, but the result is often unsatisfactory: traffic participants such as vehicles and pedestrians may be extracted as part of the drivable area, adversely affecting the intelligent vehicle's subsequent decisions.
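For contrast, the following is a minimal sketch of the kind of "traditional" color/threshold-based drivable-area extraction criticized above, written in Python with OpenCV. All thresholds and the bottom-row seed heuristic are illustrative assumptions; they are not taken from the patent.

```python
# Hedged sketch of a traditional color-threshold drivable-area extractor.
# Thresholds and the bottom-row heuristic are illustrative assumptions.
import cv2
import numpy as np

def naive_road_mask(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of road-like pixels from simple color thresholds."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Assume asphalt is low-saturation and mid-brightness (hypothetical range).
    mask = cv2.inRange(hsv, (0, 0, 60), (180, 60, 200))
    # Morphological opening removes small speckles (lane markings, noise).
    kernel = np.ones((7, 7), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Keep only connected components touching the bottom image row,
    # i.e. the region directly in front of the vehicle.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    bottom_labels = set(labels[-1, :]) - {0}
    road = np.isin(labels, list(bottom_labels)).astype(np.uint8) * 255
    return road
```

A mask built this way readily leaks onto vehicles or pedestrians whose color resembles the road surface, which is exactly the failure mode the patent targets.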

Method used




Embodiment Construction

[0029] The present invention is described in further detail below in conjunction with the accompanying drawings:

[0030] As shown in Figure 1, an end-to-end unsupervised scene drivable-area determination method comprises the following steps:

[0031] 1) Using the distribution law of the road area in space and in images, a road-location prior probability distribution map is constructed from statistics and added directly to the convolutional layers as a feature map of the detection network, so that the location prior information becomes a prior probability distribution map of the passable-area location that can be applied flexibly in real road-traffic environments (a minimal sketch of steps 1 and 2 follows after these steps);

[0032] 2) Treating passable-area cognition and understanding as a road-surface detection and segmentation problem, a new deep network architecture, the UC-FCN network, is constructed by combining the fully convolutional network (FCN) and U-NET, as the main network for ...
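The following is a minimal PyTorch sketch of the two ideas above: a statistical road-location prior concatenated to the network input as an extra feature channel (step 1), and a small FCN/U-Net style encoder-decoder with a skip connection standing in for the UC-FCN network (step 2). The class name `UCFCNLite`, the layer sizes, and the fusion point are illustrative assumptions, not the patent's exact architecture.

```python
# Sketch: location prior as an extra input channel + FCN/U-Net style network.
# Layer sizes and the name UCFCNLite are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def location_prior(masks: torch.Tensor) -> torch.Tensor:
    """Average many binary road masks (N,1,H,W) into one prior map in [0,1]."""
    return masks.float().mean(dim=0, keepdim=True)  # (1,1,H,W)

class UCFCNLite(nn.Module):
    def __init__(self, in_ch: int = 4):  # 3 RGB channels + 1 prior channel
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.MaxPool2d(2),
                                  nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        # Decoder fuses upsampled deep features with the skip connection.
        self.dec = nn.Conv2d(64 + 32, 1, 3, padding=1)

    def forward(self, rgb: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
        prior = prior.expand(rgb.size(0), -1, -1, -1)   # broadcast over batch
        x = torch.cat([rgb, prior], dim=1)              # prior as a feature map
        f1 = self.enc1(x)
        f2 = self.enc2(f1)
        f2 = F.interpolate(f2, size=f1.shape[-2:], mode="bilinear",
                           align_corners=False)
        return self.dec(torch.cat([f2, f1], dim=1))     # per-pixel road logits
```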



Abstract

The invention discloses an end-to-end unsupervised scene road-surface area determination method. A road-position prior probability distribution map is constructed, used as a feature mapping of the detection network, and added directly into a convolutional layer, yielding a convolutional network framework fused with position-prior features. A deep network architecture, the UC-FCN network, is then constructed by combining a fully convolutional network with U-NET; the constructed passable-area position prior probability distribution map is used as a feature mapping of the UC-FCN network, generating the UC-FCN-L network. A passable area is detected with a vanishing-point detection method, and the resulting detections are used as the ground truth of a training dataset to train the UC-FCN-L network, producing a deep network model for extracting the passable area. The method overcomes the difficulty of labelling passable areas, has high applicability, runs stably in multiple road environments, and offers good real-time performance. The method is high in detection accuracy and good in adaptability, real-time performance and robustness; in addition, it is simple and effective.
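The key idea in the abstract is that no hand-labelled masks are needed: detections from a classical vanishing-point based method serve as pseudo ground truth for training the UC-FCN-L network. Below is a minimal, hedged PyTorch sketch of that training loop; `vp_based_road_mask`, the loader format, and all hyperparameters are illustrative assumptions, not details from the patent.

```python
# Sketch of self-supervised training with vanishing-point pseudo labels.
# vp_based_road_mask is a hypothetical stand-in for the classical detector.
import torch
import torch.nn as nn

def train_with_pseudo_labels(model, loader, vp_based_road_mask,
                             epochs=10, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for rgb, prior in loader:                  # unlabeled frames + prior map
            with torch.no_grad():
                pseudo = vp_based_road_mask(rgb)   # (N,1,H,W) mask in {0,1}
            logits = model(rgb, prior)             # e.g. the UCFCNLite sketch
            loss = bce(logits, pseudo.float())
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```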

Description

Technical field [0001] The invention belongs to the technical field of traffic control, and in particular relates to an end-to-end self-supervised scene passable-area cognition and understanding method based on a video data set.

Background technique [0002] With the development of society, the car has become an irreplaceable means of transportation in daily human life; however, its safety problems are becoming more and more prominent. The "Global Status Report on Road Safety" points out that traffic accidents cause as many as 1.24 million deaths per year, and that the main causes are driver negligence and fatigue driving. To alleviate this situation, the development of intelligent automobile technology is particularly important. In research on automatic driving and advanced driver assistance based on computer vision, real-time cognition and understanding of the drivable area in front of the vehicle is an essential link. The driv...

Claims


Application Information

IPC (8): G06T7/194, G06T7/13, G06T5/50, G06N3/08, G06N3/04
CPC: G06N3/088, G06T5/50, G06T7/13, G06T7/194, G06N3/048, G06N3/045
Inventor: 赵祥模 (Zhao Xiangmo), 刘占文 (Liu Zhanwen), 樊星 (Fan Xing), 高涛 (Gao Tao), 董鸣 (Dong Ming), 沈超 (Shen Chao), 王润民 (Wang Runmin), 连心雨 (Lian Xinyu), 徐江 (Xu Jiang), 张凡 (Zhang Fan)
Owner CHANGAN UNIV