
A Method for 3D Object Point Cloud Recognition Based on Submanifold Sparse Convolution

A three-dimensional object recognition and submanifold sparse convolution technology, applied in the fields of deep learning and three-dimensional object detection and recognition. It reduces the computing resources required, stabilizes the training process, lowers the training difficulty, and achieves both time efficiency and recognition accuracy.

Active Publication Date: 2022-06-07
YANSHAN UNIV
5 Cites, 0 Cited by

AI Technical Summary

Problems solved by technology

At the active sites where it operates, submanifold sparse convolution behaves the same as a traditional regular convolutional neural network, but it requires fewer computing resources in terms of floating-point operations and memory.
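To see why, consider a small, purely illustrative multiply-accumulate count for a single 3x3x3 convolution layer (the grid size, occupancy, and channel counts below are assumptions, not values from the patent): a dense 3D convolution pays for every voxel in the grid, while submanifold sparse convolution pays only for the active voxels.

```python
# Illustrative only: rough multiply-accumulate (MAC) counts for one 3x3x3 conv layer,
# assuming a 100x100x100 voxel grid of which about 2% of voxels are occupied.
grid_voxels   = 100 * 100 * 100
active_voxels = int(0.02 * grid_voxels)
kernel_volume = 3 * 3 * 3
c_in, c_out   = 16, 32

dense_macs  = grid_voxels   * kernel_volume * c_in * c_out  # conventional dense convolution
sparse_macs = active_voxels * kernel_volume * c_in * c_out  # submanifold: active sites only
# (In practice only active neighbours contribute as well, so the true sparse count is lower still.)

print(f"dense  ~{dense_macs:.2e} MACs")
print(f"sparse ~{sparse_macs:.2e} MACs  ({sparse_macs / dense_macs:.0%} of dense)")
```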


Examples

Detailed Description of the Embodiments

[0041] The present invention is described in further detail below in conjunction with embodiments:

[0042] As shown in Figure 1 and Figure 2, a method for 3D object point cloud recognition based on submanifold sparse convolution includes the following steps:

[0043] Step 1: Acquire the initial point cloud data of the target scene.

[0044] The target scene may be an outdoor scene or an indoor scene. The initial point cloud data of the target scene must be obtained; it can be acquired with a depth camera, or with other monocular or binocular imaging systems. Common depth cameras include the Kinect and TOF cameras.
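As an illustration only (not part of the patent text), a scan captured by such a sensor can be loaded into an N x 3 array of XYZ coordinates; the Open3D library and the file name used here are assumptions.

```python
# Minimal sketch, assuming the capture was exported to a PCD file and Open3D is installed.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.pcd")   # hypothetical file from a Kinect/TOF capture
points = np.asarray(pcd.points)              # (N, 3) array of XYZ coordinates
print(points.shape)
```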

[0045] Step 2: Based on the initial point cloud data and a submanifold convolutional neural network, use submanifold sparse convolution to perform local feature extraction and obtain the local features of the target point cloud;

[0046] The point cloud itself is inherently sparse. When the submanifold convolutional ...
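To make the idea concrete, the following sketch (an illustration, not the patent's implementation) applies a 3x3x3 submanifold sparse convolution to voxelized point features stored in a dictionary: outputs are computed only at voxels that are already active, and only active neighbours contribute, which is what preserves the sparsity pattern and keeps the cost low.

```python
import numpy as np
from itertools import product

def submanifold_conv3d(features, weights, bias=None):
    """features: dict mapping an active voxel coordinate (x, y, z) to a (C_in,) feature vector.
    weights: array of shape (3, 3, 3, C_in, C_out) for a 3x3x3 kernel.
    Returns a dict over the same active sites, each mapped to a (C_out,) output vector."""
    c_out = weights.shape[-1]
    out = {}
    for site in features:
        acc = np.zeros(c_out) if bias is None else bias.astype(float).copy()
        # Submanifold rule: produce output only at already-active sites, and let
        # only active neighbours in the 3x3x3 window contribute.
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            neighbour = (site[0] + dx, site[1] + dy, site[2] + dz)
            f = features.get(neighbour)
            if f is not None:
                acc = acc + f @ weights[dx + 1, dy + 1, dz + 1]
        out[site] = acc
    return out

# Toy usage: two active voxels, 4 input channels, 8 output channels.
rng = np.random.default_rng(0)
feats = {(0, 0, 0): rng.normal(size=4), (0, 0, 1): rng.normal(size=4)}
w = rng.normal(size=(3, 3, 3, 4, 8))
print({site: vec.shape for site, vec in submanifold_conv3d(feats, w).items()})
```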

Abstract

The invention discloses a method for recognizing a three-dimensional object point cloud based on submanifold sparse convolution, which comprises the following steps: S1, acquire the initial point cloud of the target scene; S2, obtain the local features of the target point cloud; S3, use the output of the submanifold convolutional network as the input of two identical MLP networks; S4, pass the output of the first MLP network to the cross-entropy loss function; S5, pass the output of the second MLP network through two attention-based graph convolutional neural networks and finally to the squared-error loss function; in addition, the second MLP network is also output directly to the squared-error loss function without passing through the graph convolutional neural networks; S6, take the sum of the cross-entropy loss function and the squared-error loss function as the total loss function of the network model, and train the network model backward according to the value of the total loss function. The invention speeds up network training, improves recognition accuracy, reduces the large memory footprint, and realizes fast and efficient three-dimensional object recognition.
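Read literally, steps S4 to S6 combine a cross-entropy term from the first MLP branch with squared-error terms from the second branch and sum them for back-propagation. A minimal sketch of that loss combination (the tensor names, shapes, and regression target below are assumptions, not the patent's code) could look like:

```python
import torch
import torch.nn.functional as F

# Hypothetical branch outputs for a batch of B point clouds and K classes:
#   logits_cls  : (B, K) first MLP branch (classification head)
#   gcn_out     : (B, D) second MLP branch after the two attention-based graph conv networks
#   direct_out  : (B, D) second MLP branch taken directly, bypassing the graph conv networks
#   target_feat : (B, D) assumed regression target for the squared-error terms
def total_loss(logits_cls, gcn_out, direct_out, target_feat, labels):
    ce = F.cross_entropy(logits_cls, labels)                                      # cross-entropy term (S4)
    se = F.mse_loss(gcn_out, target_feat) + F.mse_loss(direct_out, target_feat)   # squared-error terms (S5)
    return ce + se                                                                # total loss for backward training (S6)
```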

Description

Technical Field

[0001] The invention relates to the fields of deep learning and three-dimensional object detection and recognition, and in particular to a method for recognizing three-dimensional object point clouds based on submanifold sparse convolution.

Background Art

[0002] In recent years, convolutional neural networks have driven a surge in deep learning and computer vision research and applications, and their powerful feature-learning capability has attracted widespread attention from experts and scholars at home and abroad. However, convolutional networks are very inefficient when applied to naturally sparse spatio-temporal input data, such as the point clouds obtained by lidar scanners or RGB-D cameras. How to process spatially sparse data more efficiently, and how to use it to develop spatially sparse convolutional neural networks, is therefore the top priority of...

Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06V20/64, G06V10/44, G06V10/82, G06N3/04, G06N3/08
CPC: G06N3/084, G06N3/045
Inventors: 林洪彬, 杨博, 郭聃, 陈泽宇, 关勃然, 魏佳宁
Owner: YANSHAN UNIV