Beidou visual fusion accurate lane identification and positioning method and implementation device thereof

A positioning method and lane identification technology, applied in the field of intelligent transportation, which can solve problems such as existing solutions being suitable only for a single type of special lane and being unable to achieve precise lane identification and positioning.

Active Publication Date: 2019-09-06
SHANDONG UNIV


Problems solved by technology

[0006] Current solutions can only solve one of these problems, and each solution is suitable only for a single



Examples


Embodiment 1

[0120] A Beidou vision fusion precise lane identification and positioning method, as shown in Figure 1, comprising the following steps:

[0121] (1) Lane detection, including:

[0122] Step A1: The image acquisition device photographs the road ahead and stores the pictures; as shown in Figure 4, the abscissa is the width of the picture and the ordinate is its height. The image acquisition device is a driving recorder or a 120° wide-angle camera, installed on the center line of the front windshield directly in front of the rearview mirror, at a 20° included angle. While the vehicle drives on a road with lane markings, the image acquisition device continuously captures images of the road ahead and stores them on the video memory SD card.
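As an illustration of the capture-and-store loop in step A1, the following is a minimal sketch, assuming an OpenCV-readable camera at index 0 and a hypothetical SD card mount point /mnt/sdcard; the patent does not specify an implementation language, library, or storage layout:

```python
import time
from pathlib import Path

import cv2  # OpenCV is an assumption; the patent does not name a capture library

SD_CARD_DIR = Path("/mnt/sdcard/frames")  # hypothetical SD card storage path


def capture_loop(camera_index: int = 0, interval_s: float = 0.1) -> None:
    """Continuously grab frames of the road ahead and store them as pictures."""
    SD_CARD_DIR.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(camera_index)
    if not cap.isOpened():
        raise RuntimeError("forward-facing camera not available")
    try:
        while True:
            ok, frame = cap.read()  # frame width/height correspond to the abscissa/ordinate of Figure 4
            if not ok:
                break
            cv2.imwrite(str(SD_CARD_DIR / f"{int(time.time() * 1000)}.jpg"), frame)
            time.sleep(interval_s)
    finally:
        cap.release()


if __name__ == "__main__":
    capture_loop()
```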

[0123] Step A2: Extract road features from the road pictures, detect and recognize the lane lines, and divide the lanes; a field-programmable gate array (FPGA) is used to read the video in the video memory...

Embodiment 2

[0137] The Beidou vision fusion precise lane identification and positioning method according to Embodiment 1, with the difference that:

[0138] Step A2 includes:

[0139] As shown in Figure 3, the lane detection network includes a VGG16 feature extraction layer, a lane line detection layer, a Meanshift lane segmentation layer, and a ResNet lane classification layer. The VGG16 feature extraction layer extracts features from the road picture, the ResNet lane classification layer classifies the lane lines, the Meanshift lane segmentation layer completes the lane clustering, and the lane line detection layer separates the lane lines from the background as a binary (0/1) mask.

[0140] a. After mean-value preprocessing, the road picture is sent to the VGG16 convolutional neural network (the VGG16 feature extraction layer). Mean-value preprocessing means subtracting the mean value of all pixels from each pixel of the road picture; the preprocessed picture is then sent to the VGG16 convolutional neural network to obtain the feature map; the ...
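The following is a minimal PyTorch sketch of the Figure 3 structure described above, assuming torchvision's VGG16 backbone and scikit-learn's MeanShift; the detection-head layer sizes, the clustering bandwidth, and the omission of the ResNet lane classification layer are simplifications, since the patent does not give exact layer configurations:

```python
import torch
import torch.nn as nn
from torchvision import models            # VGG16 backbone (assumed implementation)
from sklearn.cluster import MeanShift     # Meanshift clustering of lane-line pixels


def mean_preprocess(image: torch.Tensor) -> torch.Tensor:
    """Step a: subtract the mean value of all pixels from each pixel of the road picture."""
    return image - image.mean()


class LaneDetectionNet(nn.Module):
    """Sketch of the Figure 3 structure: VGG16 feature extraction + binary lane-line detection head."""

    def __init__(self) -> None:
        super().__init__()
        self.backbone = models.vgg16(weights=None).features   # VGG16 feature extraction layer
        self.detect_head = nn.Sequential(                     # lane line detection layer (0/1 mask)
            nn.Conv2d(512, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(mean_preprocess(image))          # feature map
        return torch.sigmoid(self.detect_head(feats))          # per-pixel lane-line probability


def cluster_lanes(mask: torch.Tensor, threshold: float = 0.5, bandwidth: float = 30.0):
    """Meanshift lane segmentation layer: group lane-line pixels into individual lanes."""
    ys, xs = torch.nonzero(mask.squeeze() > threshold, as_tuple=True)
    if len(xs) == 0:
        return None
    points = torch.stack([xs, ys], dim=1).cpu().numpy()
    return MeanShift(bandwidth=bandwidth).fit(points).labels_  # one cluster label per lane-line pixel
```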

Embodiment 3

[0160] The Beidou vision fusion precise lane identification and positioning method according to Embodiment 1 or 2, with the difference that:

[0161] Step B1 refers to:

[0162] Capture Beidou satellite signals, track the Beidou satellite signals, demodulate the navigation messages, measure the pseudo-range and carrier phase, and calculate the latitude and longitude.
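The patent does not spell out the position computation; as an illustration of the final step (solving for position from measured pseudo-ranges), the sketch below uses the standard iterative least-squares single-point solution, assuming at least four satellites with known ECEF coordinates; conversion of the ECEF result to latitude and longitude is omitted:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s


def solve_position(sat_positions: np.ndarray, pseudoranges: np.ndarray, iterations: int = 10):
    """Iterative least-squares single-point solution from >= 4 satellites.

    sat_positions: (N, 3) satellite coordinates in ECEF metres (from the navigation messages).
    pseudoranges:  (N,)   measured pseudo-ranges in metres.
    Returns the receiver ECEF position (metres) and clock bias (seconds).
    """
    x = np.zeros(3)  # receiver position estimate
    b = 0.0          # receiver clock bias expressed in metres (c * dt)
    for _ in range(iterations):
        ranges = np.linalg.norm(sat_positions - x, axis=1)
        residuals = pseudoranges - (ranges + b)
        # Jacobian: negative unit line-of-sight vectors plus a clock-bias column of ones
        H = np.hstack([-(sat_positions - x) / ranges[:, None], np.ones((len(ranges), 1))])
        delta, *_ = np.linalg.lstsq(H, residuals, rcond=None)
        x += delta[:3]
        b += delta[3]
    return x, b / C
```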

[0163] Step B2 includes:

[0164] e. Input the positioning information currently obtained by the Beidou receiver and perform initial matching against the electronic map road network database. Initial matching refers to using the hidden Markov model (HMM) method to find multiple possible initial candidate road sections according to the positioning information;

[0165] f. Enter tracking matching. Tracking matching refers to matching the location point at the current moment (determined by the positioning information) to the same road section that the location point at the previous moment was matched to; when arriving at a complex section such as a road int...
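As an illustration of steps e and f, the following is a highly simplified sketch of the matching idea (a distance-based emission score plus a transition preference for staying on the previously matched section), not the patent's full HMM formulation; the road-network records, sigma, and stay bonus are hypothetical:

```python
import math

# Hypothetical road-network records: section id -> representative point (lat, lon) in degrees.
# Real map matching projects the fix onto each section's geometry; single points keep the sketch short.
ROAD_NETWORK = {"section_A": (36.675, 117.125), "section_B": (36.676, 117.127)}


def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = math.sin((lat2 - lat1) / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * math.asin(math.sqrt(a))


def emission_score(fix, section_id, sigma_m=10.0):
    """Likelihood-style score of observing the Beidou fix given the vehicle is on this section."""
    d = haversine_m(fix, ROAD_NETWORK[section_id])
    return math.exp(-0.5 * (d / sigma_m) ** 2)


def initial_match(fix, top_k=3):
    """Step e: rank candidate road sections for the current positioning fix."""
    ranked = sorted(ROAD_NETWORK, key=lambda s: emission_score(fix, s), reverse=True)
    return ranked[:top_k]


def tracking_match(fix, previous_section, stay_bonus=2.0):
    """Step f: prefer the section matched at the previous moment unless another is clearly more likely."""
    scores = {s: emission_score(fix, s) for s in ROAD_NETWORK}
    scores[previous_section] *= stay_bonus  # transition preference for staying on the same section
    return max(scores, key=scores.get)
```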



Abstract

The invention relates to a Beidou visual fusion accurate lane identification and positioning method and an implementation device thereof. The method comprises the following steps: (1) lane detection: detecting and identifying lane lines and dividing the lanes; identifying the current, left, and right lane types and determining the lane in which the vehicle is currently driving; (2) map matching to obtain positioning information: determining the road on which the vehicle is located and extracting information such as the number of lanes, lane types, lane widths, and bus lane opening times; measuring the vehicle speed and obtaining the BTD; (3) positioning correction: correcting the Beidou positioning result; and (4) monitoring and early warning: judging whether the vehicle is running on the road and, if so, capturing, identifying, and uploading. With this method, ordinary lanes, bus lanes, emergency lanes, and reversible lanes can be accurately identified and distinguished with masks of different colors, providing a reference for the driver and more accurate positioning for driver-assistance and navigation systems.

Description

technical field

[0001] The invention belongs to the technical field of intelligent transportation, and in particular relates to a Beidou vision fusion lane identification and precise positioning method and an implementation device thereof.

Background technique

[0002] With the continuous development of modern transportation systems, special motor vehicle lanes such as bus lanes, emergency lanes, and tidal lanes are becoming more and more common. These special lanes improve the efficiency of urban vehicle traffic, effectively relieve traffic congestion, and at the same time gain valuable time for special vehicles such as ambulances.

[0003] However, private cars still occupy bus lanes, illegally occupy emergency lanes, and disobey the rules of tidal lanes. First, drivers are prone to take their chances; second, urban road traffic conditions are complex, and different cities manage special lanes differently (such as time-sh...

Claims


Application Information

IPC(8): G06K9/00; G06N3/04; G08G1/017; G01S19/42; G01S19/52
CPC: G08G1/0175; G08G1/017; G01S19/42; G01S19/52; G06V20/584; G06V20/588; G06N3/045; Y02T10/40
Inventor: 邢建平, 许才溢, 王胜利, 宁亚飞, 于明卫, 刘海锐
Owner SHANDONG UNIV