
Method for constructing semantic map on line by utilizing fusion of laser radar and visual sensor

A technology fusing a visual sensor and a laser radar, applied in the fields of re-radiation, instrumentation, electromagnetic-wave re-radiation, etc. It can solve the problem of large data volumes and achieve the effects of more convenient driving, more efficient update iteration, and more efficient map reuse.

Pending Publication Date: 2020-11-13
廊坊和易生活网络科技股份有限公司

AI Technical Summary

Problems solved by technology

[0005] However, most existing semantic maps are offline maps of annotated point clouds or objects, constructed with some degree of manual intervention, and the volume of data to be collected is enormous (typically terabytes). Storing and maintaining this massive semantic information online in real time, and retrieving and updating it based on that massive semantic information, still face technical bottlenecks, which has become an obstacle to the widespread application of semantic maps.



Examples


Embodiment 1

[0067] As shown in Figure 1, a schematic flowchart of the method for constructing a semantic map online by fusing a laser radar and a visual sensor according to an embodiment of the present invention, the method of the present invention may comprise the following steps:

[0068] 101. Obtain the grid map of the current vehicle, where the grid map includes a plurality of grid cells for storing detection targets, and each grid cell has a unique one-dimensional string identifier associated with its location.

[0069] For example, this step 101 may include the following sub-steps:

[0070] 101-1. Obtain the current vehicle location information by means of GPS-RTK.

[0071] It should be understood that the absolute position (latitude and longitude) of the vehicle is obtained through GPS-RTK with centimeter-level positioning accuracy, and an initialization grid is then established around it (the minimum grid unit is 15 cm), but the diff...
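To make this initialization concrete, the following is a minimal illustrative sketch, not code from the patent, of snapping an RTK latitude/longitude fix to a local 15 cm grid around the vehicle; the equirectangular approximation and every name in it are assumptions.

```python
# Illustrative only -- not code from the patent. A minimal sketch of snapping
# a GPS-RTK fix (latitude/longitude) to a local 15 cm grid around the
# vehicle's initial position, using a simple equirectangular approximation.
import math

CELL_M = 0.15            # minimum grid unit from the embodiment: 15 cm
EARTH_R = 6_378_137.0    # WGS-84 equatorial radius in metres

def latlon_to_cell(lat, lon, origin_lat, origin_lon):
    """Return the integer (row, col) of the 15 cm cell containing (lat, lon),
    relative to the grid origin (e.g. the vehicle's initial RTK fix)."""
    north_m = math.radians(lat - origin_lat) * EARTH_R
    east_m = math.radians(lon - origin_lon) * EARTH_R * math.cos(
        math.radians(origin_lat))
    return int(north_m // CELL_M), int(east_m // CELL_M)

# A point about 1.5 m north and 1.5 m east of the origin lands 10 cells away
# in each direction:
print(latlon_to_cell(39.5238135, 116.7030175, 39.5238, 116.7030))  # (10, 10)
```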

Embodiment 2

[0105] As shown in Figures 2 to 6, a two-dimensional grid map of the current vehicle position is established and initialized. The GEOHASH algorithm is used to encode the initialized two-dimensional grid map into one-dimensional strings; the lidar and vision sensors are then fused to obtain synchronized data, and multi-attribute information such as the categories, locations, and scales of the static and dynamic targets detected by the lidar and the vision sensor is imported into a Redis database to generate a multi-attribute grid map. Figure 2 shows the process of constructing the semantic high-precision map; the specific steps are as follows:
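Since the embodiment keys each cell with a GEOHASH string, here is a minimal self-contained geohash encoder for illustration. The precision-11 choice is an assumption made to match the 15 cm grid unit (an 11-character geohash cell is roughly 15 cm × 15 cm near the equator); the patent names the algorithm but states no precision.

```python
# Illustrative, self-contained geohash encoder (the patent names the GEOHASH
# algorithm but gives no code). Precision 11 is an assumption: an
# 11-character geohash cell is roughly 15 cm x 15 cm near the equator,
# matching the embodiment's 15 cm grid unit.
_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lon, precision=11):
    """Encode (lat, lon) into a one-dimensional geohash string key."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    code, ch, bits, even = [], 0, 0, True
    while len(code) < precision:
        if even:                                  # even bits split longitude
            mid = (lon_lo + lon_hi) / 2
            ch = ch * 2 + (lon >= mid)
            lon_lo, lon_hi = (mid, lon_hi) if lon >= mid else (lon_lo, mid)
        else:                                     # odd bits split latitude
            mid = (lat_lo + lat_hi) / 2
            ch = ch * 2 + (lat >= mid)
            lat_lo, lat_hi = (mid, lat_hi) if lat >= mid else (lat_lo, mid)
        even = not even
        bits += 1
        if bits == 5:                             # 5 bits -> 1 base32 char
            code.append(_BASE32[ch])
            ch, bits = 0, 0
    return "".join(code)

print(geohash(39.9042, 116.4074))  # unique 11-character key for this cell
```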

[0106] Step 1. Create a grid map. Create a 192 m × 192 m square grid extending 96 meters around the vehicle. Each grid cell is 15 cm × 15 cm, giving 1,638,400 cells in total, and each cell represents a unique geographic location code value; the following multi-attribute information of each detected tar...
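As a quick check of the arithmetic: 192 m / 0.15 m = 1280 cells per side, and 1280² = 1,638,400 cells. The sketch below, an illustration built on assumptions, reuses the geohash() function above and the redis-py client to show how one detected target's multi-attribute information could be imported into Redis under its cell's geohash key; the hash field names are invented for the example.

```python
# Quick arithmetic check plus an illustrative Redis import, reusing geohash()
# from the previous sketch and the redis-py client. The hash field names are
# assumptions; the patent only says multi-attribute target information is
# imported per grid cell.
import redis

CELLS_PER_SIDE = round(192 / 0.15)        # 1280 cells per side
assert CELLS_PER_SIDE ** 2 == 1_638_400   # matches the count in Step 1

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def import_target(lat, lon, attrs):
    """Store one detected target's attributes under its cell's geohash key."""
    key = "cell:" + geohash(lat, lon, precision=11)
    r.hset(key, mapping=attrs)   # one Redis hash per grid cell
    return key

key = import_target(39.9042, 116.4074,
                    {"category": "pedestrian", "dynamic": 1,
                     "scale_m": 0.6, "source": "lidar+camera"})
print(r.hgetall(key))
```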

Embodiment 3

[0176] According to another aspect of the embodiments of the present invention, a smart car driving system is also provided, which may include a control device and a multi-eye imaging device connected to the control device, the multi-eye imaging device including a lidar and a vision sensor;

[0177] After the control device receives the ranging data and the image data collected by the laser radar and the visual sensor respectively, it constructs the three-dimensional semantic map of the smart car using the method for constructing a semantic map online described in the first or second embodiment above.
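Purely as an illustration of the architecture described in this embodiment, and with every name below being an assumption (the patent describes the system in prose only), a skeleton of a control device that consumes synchronized lidar and camera frames and feeds fused targets into the online map might look like this:

```python
# Purely illustrative skeleton of the driving system in this embodiment; the
# patent describes the architecture in prose only, so every name below is an
# assumption. Stubs stand in for the detection, matching, and fusion steps of
# Embodiments 1 and 2.
from typing import Any, Callable, Iterable, List, Tuple

def detect_lidar_targets(ranging: Any) -> List[dict]:
    """Stand-in for target detection on lidar ranging data."""
    return []

def match_image_features(image: Any) -> List[dict]:
    """Stand-in for feature extraction and matching on camera images."""
    return []

def fuse_targets(lidar: Iterable[dict], visual: Iterable[dict]) -> List[dict]:
    """Stand-in for multi-attribute fusion of the two target lists."""
    return list(lidar) + list(visual)

class ControlDevice:
    """Consumes synchronized lidar + camera frames, updates the online map."""

    def __init__(self,
                 read_frame: Callable[[], Tuple[Any, Any]],
                 import_target: Callable[[dict], None]):
        self.read_frame = read_frame        # multi-eye imaging device driver
        self.import_target = import_target  # e.g. the Redis writer above

    def step(self) -> None:
        ranging, image = self.read_frame()
        for target in fuse_targets(detect_lidar_targets(ranging),
                                   match_image_features(image)):
            self.import_target(target)
```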

[0178] In practical applications, an embodiment of the present invention also provides a smart car, which may include the above-mentioned smart car driving system.

[0179] Those skilled in the art should understand that the embodiments of the present invention may be provided as methods, systems or computer program products. Accordingly, the...



Abstract

The invention relates to a method for constructing a semantic map online by fusing a laser radar and a visual sensor. The method comprises the following steps: acquiring an initialized grid map of the current vehicle, and acquiring distance-measurement data corresponding to the laser radar and image data corresponding to the visual sensor; performing target detection on the ranging data of the laser radar to obtain multi-attribute information of a plurality of first-class detection targets; performing feature extraction and matching on the image data of the visual sensor to obtain multi-attribute information of a plurality of second-class detection targets; fusing the multi-attribute information of the first-class and second-class detection targets, importing the fused multi-attribute information into a Redis database, generating a high-dimensional grid map serving as the semantic map, and storing the multi-attribute information of each detection target in the high-dimensional grid map as dynamic database tables. With this method, the multi-dimensional semantic information of the dynamic and static environments around the vehicle can be represented online in real time.

Description

Technical field

[0001] The invention relates to the technical field of map construction, and in particular to a method for constructing a semantic map online by fusing a laser radar and a visual sensor.

Background technique

[0002] Current descriptions of the driverless road environment can be roughly divided into high-precision maps, occupancy grid maps, cost maps, topological maps, and semantic maps. High-precision maps can accurately represent the road network in three dimensions, with good visualization and centimeter-level positioning accuracy, and they provide a large amount of auxiliary information (such as intersection layouts and road-sign locations) for driverless driving. However, high-precision maps require a large amount of offline manual labeling and can only contain static road information; data collection and update costs are high and efficiency is low.

[0003] The cost map is composed of multiple layers of occupancy grid maps with differen...


Application Information

IPC(8): G01C21/32; G01S17/86; G01S17/931
CPC: G01C21/32; G01S17/86; G01S17/931
Inventors: 安成刚, 张立国, 张旗, 李巍, 李会祥, 吴程飞, 张志强, 王增志, 史明亮
Owner: 廊坊和易生活网络科技股份有限公司