
Construction method of multi-plane coding point cloud feature deep learning model based on pointpillars

A deep learning model construction method in the field of computer vision, which can solve problems such as the emptiness of unmanned-driving scenes, the non-uniformity of point clouds collected by lidar, and the sparsity of distant points

Active Publication Date: 2020-09-01
SHANGHAI UNIV
4 Cites · 18 Cited by

AI Technical Summary

Problems solved by technology

Complex and changeable application scenarios such as unmanned driving expose obvious limitations of traditional two-dimensional target detection algorithms. To improve detection accuracy and ensure driver safety, both the accuracy and the speed of three-dimensional target detection pose a major challenge. However, unmanned-driving scenes are largely empty, the point clouds collected by lidar are non-uniform, distant points are collected very sparsely, and deep learning methods for spatial point clouds need to retain complete spatial information.



Examples


Embodiment 1

[0051] A method for constructing a multi-plane encoding point cloud feature deep learning model based on PointPillars, comprising the following steps:

[0052] Step 1: Obtain training samples. Each training sample includes point cloud data containing a detection target and the label information corresponding to that point cloud data; the label information indicates the bounding-box coordinates of the detection target in the point cloud data and the classification label of the target within those bounding-box coordinates.
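As a concrete illustration of Step 1, the sketch below builds one such training sample with NumPy. All names and shapes here (`points`, `boxes`, `labels`, the 7-DoF box layout, the class set) are illustrative assumptions, not taken from the patent:

```python
import numpy as np

# Hypothetical sketch of one training sample as described in Step 1.
# Field names and shapes are illustrative, not from the patent.
def make_training_sample(num_points=1000, num_targets=2, seed=0):
    rng = np.random.default_rng(seed)
    # Lidar point cloud: N x 4 array of (x, y, z, reflectance).
    points = rng.uniform(-50.0, 50.0, size=(num_points, 4)).astype(np.float32)
    # One 7-DoF bounding box per target: (cx, cy, cz, length, width, height, yaw).
    boxes = rng.uniform(-1.0, 1.0, size=(num_targets, 7)).astype(np.float32)
    # Integer classification label per box (e.g. 0 = car, 1 = pedestrian).
    labels = rng.integers(0, 2, size=num_targets)
    return {"points": points, "boxes": boxes, "labels": labels}

sample = make_training_sample()
print(sample["points"].shape, sample["boxes"].shape, sample["labels"].shape)
```

In practice the point cloud and boxes would come from a labeled lidar dataset rather than random numbers; the dictionary merely fixes the sample layout the training step consumes.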

[0053] Step 2: Train the multi-plane encoding point cloud feature deep learning model with the training samples, so that when point cloud data from a training sample is input into the trained model, the recognition result is the bounding-box coordinates locating each detection target in the point cloud data and the probability that a target exists within those bounding-box coordinates.
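The "multi-plane encoding" idea of projecting the point cloud onto the three coordinate planes can be sketched as follows. The grid size, spatial extent, and the use of raw point counts as the per-pillar feature are assumptions made for illustration; the patent's actual pillar encoder and fusion network are not reproduced here:

```python
import numpy as np

def multi_plane_pillar_features(points, grid=(16, 16), extent=50.0):
    """Bin the point cloud into a grid of pillars on each of the three
    coordinate planes (xy, xz, yz) and record the point count per pillar."""
    planes = [(0, 1), (0, 2), (1, 2)]  # axis pairs: xy, xz, yz
    feats = []
    for a, b in planes:
        # Map coordinates in [-extent, extent) to integer cell indices.
        ia = np.clip(((points[:, a] + extent) / (2 * extent) * grid[0]).astype(int), 0, grid[0] - 1)
        ib = np.clip(((points[:, b] + extent) / (2 * extent) * grid[1]).astype(int), 0, grid[1] - 1)
        count = np.zeros(grid, dtype=np.float32)
        np.add.at(count, (ia, ib), 1.0)  # unbuffered scatter-add per point
        feats.append(count)
    return np.stack(feats)  # shape: (3, H, W), one channel per plane

pts = np.random.default_rng(0).uniform(-50, 50, size=(500, 4)).astype(np.float32)
feats = multi_plane_pillar_features(pts)
print(feats.shape, feats.sum())  # each point lands in exactly one pillar per plane
```

A learned model would replace the raw counts with learned per-pillar features and fuse the three plane grids, as the abstract describes; this sketch only shows the three-plane binning geometry.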

...

Embodiment 2

[0086] A method for point cloud target detection using the PointPillars-based multi-plane encoding point cloud feature deep learning model constructed in Embodiment 1: the collected point cloud data is input into the model for computation, and the model outputs the bounding-box coordinates of each detected target in the point cloud data and the probability that the detected target exists within those bounding-box coordinates.
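A minimal sketch of the input/output contract described in this embodiment, with a stub standing in for the trained network (the real model is not given in the source; the box layout and score shape are assumptions):

```python
import numpy as np

# Illustrative stand-in for the trained model from Embodiment 1: this stub
# only demonstrates the input/output contract, not the real network.
def detect(points):
    """Input: N x 4 point cloud. Output: (boxes, scores), where each box is
    a 7-DoF (cx, cy, cz, l, w, h, yaw) array and each score is the
    probability that a target exists inside that box."""
    centroid = points[:, :3].mean(axis=0)
    box = np.concatenate([centroid, [4.0, 2.0, 1.5, 0.0]]).astype(np.float32)
    boxes = box[None, :]                          # (M, 7) detected boxes
    scores = np.array([0.9], dtype=np.float32)    # (M,) existence probabilities
    return boxes, scores

pts = np.zeros((100, 4), dtype=np.float32)
boxes, scores = detect(pts)
print(boxes.shape, scores.shape)
```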



Abstract

The invention belongs to the technical field of computer vision, and particularly discloses a construction method for a multi-plane encoding point cloud feature deep learning model based on PointPillars. The construction method comprises the following steps: acquire training samples, and train the multi-plane encoding point cloud feature deep learning model with them, so that the recognition result obtained by inputting the point cloud data from the training samples into the trained model is the bounding-box coordinates locating each detection target in the point cloud data and the probability that a target exists within those coordinates. The method realizes three-dimensional spatial sampling of point cloud data: support point cloud features in three planes are acquired through sampling and then fused by learning. This solves the spatial-information loss of point cloud sampling in the prior art, better reduces the detection-precision loss caused by the point cloud's differing angles in each spatial direction, and yields a model with good robustness and high detection accuracy.

Description

Technical field

[0001] The invention belongs to the technical field of computer vision, and in particular relates to a construction method for a PointPillars-based multi-plane encoding point cloud feature deep learning model.

Background technique

[0002] Target detection is an important task in computer vision; its purpose is to identify the type of a target and locate its position. Traditional two-dimensional target detection is already very mature in the computer vision field, but it operates at the image level and only captures the planar information of an object. With the rapid development of the autonomous driving industry, object detection pays ever more attention to the three-dimensional information of objects, so deep-learning-based three-dimensional object detection has also developed rapidly. Current 3D object detection technology mainly relies on images and lidar point clouds for e...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06V2201/07; G06N3/045; G06F18/253
Inventors: 周洋, 吕精灵, 李小毛, 彭艳, 蒲华燕, 谢少荣, 罗均
Owner: SHANGHAI UNIV