Radar generated color semantic image system and method based on conditional generative adversarial network

A radar and color-imaging technology, applied in the fields of sensors and artificial intelligence. It addresses problems such as the incomplete road-environment information obtainable from a single image sensor, the increased computational load that multi-sensor fusion places on an unmanned vehicle's onboard chips, and inaccurate imaging, with the effects of avoiding imaging uncertainty and instability, eliminating road shadows, and achieving high efficiency.

Active Publication Date: 2018-03-30
BEIHANG UNIV
Cites 7 · Cited by 75

AI Technical Summary

Problems solved by technology

[0008] The present invention addresses the problems that the road-environment information (in particular, distance information) perceivable by a single image sensor is currently incomplete, that fusing multi-sensor data requires additional data-fusion strategies and computing resources, and that such fusion increases the computational load on an unmanned vehicle's onboard processing chips.




Embodiment Construction

[0039] The present invention will be further described in detail with reference to the accompanying drawings and embodiments.

[0040] The radar-generated color semantic image system and method of the present invention are based on a conditional generative adversarial network (cGAN) and use machine-learning and deep-learning algorithms. The performance of such algorithms depends in part on the model architecture, but a larger factor is whether the problem specification is complete. The present invention solves a color semantic road-scene reconstruction problem that has not previously been considered, and a specification of this problem has not been studied in academia. The present invention selects an adversarial generative network framework capable of specifying the target problem, uses calibrated radar data, and performs up-sampling to recover as much road depth information as possible. The problem is then specified over image pairs formed by the up-sampled radar depth map and the corresponding RGB image.
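The up-sampling step described above turns a sparse radar depth map into a dense one. The patent does not publish its up-sampling algorithm, so the following is only a minimal sketch of one plausible approach: a neighbour-propagation fill that repeatedly assigns each missing pixel the mean of its valid 4-connected neighbours. The function name `densify_sparse_depth` and the iteration-based scheme are illustrative assumptions, not the invention's actual method.

```python
import numpy as np

def densify_sparse_depth(depth, iterations=8):
    """Fill zero (missing) pixels of a sparse depth map by repeatedly
    propagating the mean of valid 4-connected neighbours.
    NOTE: illustrative stand-in for the patent's up-sampling module."""
    d = depth.astype(np.float32).copy()
    for _ in range(iterations):
        missing = d == 0.0
        if not missing.any():
            break
        pad = np.pad(d, 1, mode="constant")          # zero border
        neigh = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1],
                          pad[1:-1, :-2], pad[1:-1, 2:]])
        count = (neigh > 0).sum(axis=0)              # valid neighbours
        total = neigh.sum(axis=0)
        fillable = missing & (count > 0)
        d[fillable] = total[fillable] / count[fillable]
    return d
```

Each iteration grows the valid region by one pixel in Manhattan distance, so a handful of iterations suffices for typical lidar scan-line gaps; more sophisticated schemes (bilateral or edge-aware filling) would respect object boundaries better.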



Abstract

The invention discloses a radar-generated color semantic image system and method based on a conditional generative adversarial network, belonging to the technical fields of sensors and artificial intelligence. The system includes a data acquisition module based on a radar point cloud and a camera, an original radar point cloud up-sampling module, a model training module based on a conditional generative adversarial network, and a model application module based on the conditional generative adversarial network. The method includes the following steps: constructing a radar point cloud–RGB image training set; constructing a conditional generative adversarial network based on a convolutional neural network and training the model; and finally, using only sparse radar point cloud data and the trained network, generating a meaningful color road-scene image in real time in the vehicle environment, for use in automatic-driving and driver-assistance analysis. The network is efficient, the tuning of network parameters is accelerated, an optimal result can be obtained, and high accuracy and high stability are ensured.
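The training step above pits a generator (radar depth → RGB) against a discriminator. The patent does not give its loss functions, but conditional image-to-image GANs are commonly trained with an adversarial term plus a lambda-weighted L1 reconstruction term; the sketch below shows that standard objective in plain numpy. The function names and the lambda value of 100 are assumptions for illustration, not values taken from the patent.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy averaged over all elements."""
    p = np.clip(pred, eps, 1 - eps)
    return float(-(target * np.log(p) + (1 - target) * np.log(1 - p)).mean())

def generator_loss(d_fake, fake_rgb, real_rgb, lam=100.0):
    """Adversarial term (fool the discriminator: label 1) plus a
    lambda-weighted L1 term pulling the generated image toward the
    ground-truth RGB image. lam=100 is an assumed default."""
    adv = bce(d_fake, np.ones_like(d_fake))
    l1 = float(np.abs(real_rgb - fake_rgb).mean())
    return adv + lam * l1

def discriminator_loss(d_real, d_fake):
    """Label real (depth, RGB) pairs 1 and generated pairs 0."""
    return 0.5 * (bce(d_real, np.ones_like(d_real))
                  + bce(d_fake, np.zeros_like(d_fake)))
```

At inference time only the generator is kept, which is why the abstract can claim real-time operation from sparse radar input alone.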

Description

technical field

[0001] The invention relates to a radar-based color semantic image generation system and method based on conditional Generative Adversarial Networks (cGANs), belonging to the technical fields of sensors and artificial intelligence.

background technique

[0002] In the field of unmanned driving, laser radar (LIDAR) and optical cameras are the main sensor devices with which unmanned vehicles perceive the surrounding environment. A vehicle lidar constructs a point cloud of the surrounding environment within a certain range, as shown in figure 1, with a perception range of roughly tens to two hundred meters. An optical camera images the surrounding environment to obtain color pictures, as shown in figure 2; its perception accuracy and perception distance depend on the optical imaging elements, generally reaching hundreds to thousands of meters.

[0003] The laser radar perceives the obstacles in the surrounding environment...
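Pairing the two sensors described above requires projecting the calibrated lidar point cloud into the camera's image plane, which is how the radar-depth side of each training pair is obtained. The sketch below assumes a standard pinhole model with a 3x3 intrinsic matrix `K` and a 4x4 lidar-to-camera extrinsic transform `T`; the function name `project_points` is hypothetical.

```python
import numpy as np

def project_points(points, K, T, image_shape):
    """Project 3-D lidar points (N, 3, lidar frame) into the camera image.
    K: 3x3 camera intrinsics; T: 4x4 lidar-to-camera extrinsics.
    Returns a sparse depth map holding camera-frame z at each hit pixel."""
    h, w = image_shape
    homog = np.hstack([points, np.ones((len(points), 1))])
    cam = (T @ homog.T).T[:, :3]          # points in the camera frame
    cam = cam[cam[:, 2] > 0]              # keep points in front of camera
    uvw = (K @ cam.T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    depth = np.zeros((h, w), np.float32)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[inside], u[inside]] = cam[inside, 2]
    return depth
```

Because a lidar returns far fewer points than the image has pixels, the resulting depth map is sparse, which motivates the up-sampling module the system includes.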


Application Information

IPC(8): G06K9/00; G06K9/62; G06N3/04; G01S13/89
CPC: G01S13/89; G06V20/56; G06N3/045; G06F18/214
Inventor: 牛建伟 (Niu Jianwei), 欧阳真超 (Ouyang Zhenchao), 齐之平 (Qi Zhiping)
Owner: BEIHANG UNIV