Method for realizing scene structure prediction, target detection and lane level positioning

A target detection and lane-level positioning technology applied in the field of neural networks. It addresses problems such as low GPS positioning accuracy, the inability to achieve accurate positioning in tunnels or other poor-signal scenes, and poor adaptability to unfamiliar scenes, with the effect of reducing prediction time and avoiding manual labeling work.


AI Technical Summary

Problems solved by technology

[0003] To sum up, the problems in the prior art are: the accuracy of most current GPS positioning is not high, and GPS cannot achieve precise positioning in tunnels or other poor-signal scenes. From the perspective of network adaptability, the target detection network of patent CN111047630A adapts poorly to unfamiliar scenes, because even a slight change in the environment requires labeling a large number of new data sets for retraining.




Embodiment Construction

[0055] The technical solutions in the embodiments of the present invention will be described clearly and in detail below with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the invention.

[0056] The technical solution by which the present invention solves the above problems is as follows:

[0057] As shown in Figure 1, the method for lane-level positioning, scene structure prediction and target detection provided by an embodiment of the present invention includes the following steps:

[0058] 1. Construct a multi-task neural network for lane-level positioning, scene structure prediction and target detection. The structure of the scene structure prediction and target detection multi-task network is shown in Figure 2; the network in the method of the present invention adopts the contex...
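The text truncates here and the exact backbone is not disclosed. Purely as a rough illustration of what a multi-task network of this kind could look like, the sketch below uses PyTorch with a shared convolutional encoder feeding two heads, one for scene structure prediction and one for target detection, plus a weighted sum of the two task losses in the spirit of the abstract's "loss function mathematical model". The `SceneDetectNet` name, all layer sizes, the head designs, and the loss weights are assumptions, not the patented architecture.

```python
# Illustrative sketch only: a shared-encoder multi-task network for
# scene structure prediction + target detection. Layer sizes, heads,
# and loss weights are assumptions, not the architecture in the patent.
import torch
import torch.nn as nn

class SceneDetectNet(nn.Module):  # hypothetical name
    def __init__(self, num_scene_classes=8, num_anchors=9, num_det_classes=10):
        super().__init__()
        # Shared feature encoder (stand-in for the unspecified backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Head 1: per-pixel scene structure prediction (e.g. road layout classes).
        self.scene_head = nn.Conv2d(128, num_scene_classes, 1)
        # Head 2: anchor-based target detection (class scores + 4 box offsets).
        self.det_head = nn.Conv2d(128, num_anchors * (num_det_classes + 4), 1)

    def forward(self, x):
        feats = self.encoder(x)           # features shared by both tasks
        return self.scene_head(feats), self.det_head(feats)

# Weighted combination of the two task losses; w_scene and w_det are
# assumed hyperparameters, and det_loss_value is computed elsewhere.
def multitask_loss(scene_pred, scene_gt, det_loss_value, w_scene=1.0, w_det=1.0):
    scene_loss = nn.functional.cross_entropy(scene_pred, scene_gt)
    return w_scene * scene_loss + w_det * det_loss_value
```

Sharing one encoder lets the detection and scene-structure tasks reuse the same image features, which is the usual motivation for the multi-task design described in this step.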



Abstract

The invention discloses a method for realizing scene structure prediction, target detection and lane-level positioning, and relates to the fields of automatic driving, deep learning and computer vision. The method comprises the following steps: firstly, constructing a neural network for lane-level positioning, scene structure prediction and target detection, and constructing a loss function mathematical model from the losses between the scene structure and target detection predictions and their ground-truth values; making a data set from images and a map, and training the network; deploying the network on a vehicle to output detection results; and finally, retrieving and matching the output scene structure against a map, correcting the positioning error of the vehicle, and realizing lane-level positioning. With this network, a data set can be made from images and maps and the network trained in a closed loop, so that scene structure prediction, target detection and lane-level positioning can be completed using only image and map information. The road structure contained in the scene structure prediction result can also be used in automatic driving.
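The abstract does not specify the matching method. As a minimal sketch of the retrieval-matching idea (compare the network's scene structure output against candidate scene structures derived from the map, then correct the vehicle position toward the best match), the following assumes scene structures are encoded as fixed-length descriptors compared by cosine similarity; the descriptor encoding, candidate generation, and function names are all hypothetical.

```python
# Minimal sketch of retrieval matching between a predicted scene structure
# and map-derived candidates. Descriptors, candidates, and the correction
# rule are illustrative assumptions, not the patent's matching method.
import numpy as np

def best_map_match(pred_descriptor, candidate_descriptors, candidate_poses):
    """Return the map pose whose scene-structure descriptor is most
    similar (by cosine similarity) to the network's prediction."""
    pred = pred_descriptor / np.linalg.norm(pred_descriptor)
    cands = candidate_descriptors / np.linalg.norm(
        candidate_descriptors, axis=1, keepdims=True)
    sims = cands @ pred                      # cosine similarity per candidate
    best = int(np.argmax(sims))
    return candidate_poses[best], sims[best]

# Hypothetical usage: snap a coarse GPS estimate to the best-matching
# lane-level pose; descriptors and poses would come from the map pipeline.
# corrected_pose, score = best_map_match(net_descriptor, descriptors, poses)
```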

Description

technical field

[0001] The invention belongs to the fields of automatic driving, deep learning and computer vision, and relates to a neural network for lane-level positioning, scene structure prediction and target detection.

Background technique

[0002] With the development of deep learning, autonomous driving technology has become increasingly mature, and vehicles' ability to perceive the environment has gradually improved. Most of today's autonomous driving platforms still rely on powerful sensors (such as lidar and high-precision GPS) for environmental perception, but most of these sensors are expensive and bulky. If only visual sensors were used to complete environmental perception tasks, costs could be greatly reduced. At present, most GPS receivers used for positioning are prone to deviation or inaccurate positioning due to their low accuracy, and GPS still cannot achieve accurate positioning in tunnels or remote areas with poor or no signal. The present...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06V10/74; G06V10/82; G06V20/58; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06N3/045; G06F18/22; G06F18/2431
Inventors: 冯明驰, 梁晓雄, 萧红, 岑明, 李成南, 王鑫, 宋贵林, 邓程木