
A Depth Camera Localization Method Based on Convolutional Neural Network

A method combining convolutional neural networks with depth camera technology, applied in the field of robot vision positioning. It addresses problems such as positioning failure, loss of information, and the inability of ordinary convolutions to fully extract features from unstructured point clouds, and improves convenience of deployment and use.

Active Publication Date: 2021-02-19
Assignee: HANGZHOU LANXIN TECH CO LTD

AI Technical Summary

Problems solved by technology

There is no complete method for directly fusing color images and depth maps in pose calculation; because the two information sources are not deeply fused, the computational cost is high. At the same time, since positioning from a single color image or a single depth image alone is unstable, simply combining the two to calculate the pose is even less reliable and is prone to positioning failure.
[0010] When convolutional neural networks are used to compute poses from depth maps, ordinary convolutions cannot fully extract the features of unstructured point clouds, because pixels that are adjacent in the image can be far apart in space.
Moreover, for the task of estimating pose, normalizing the depth map discards most of its information. Current general-purpose convolutional neural networks therefore cannot successfully process depth-map information.
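The spatial-adjacency problem described above can be made concrete by back-projecting depth pixels into camera space. This is a minimal sketch with hypothetical pinhole intrinsics (illustrative values only, not from the patent):

```python
import numpy as np

# Hypothetical pinhole intrinsics (fx, fy, cx, cy) -- illustrative values only.
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5

def backproject(u, v, z):
    """Back-project pixel (u, v) with depth z (metres) into camera space."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

# Two horizontally adjacent pixels straddling a depth discontinuity:
# one on a nearby object (0.8 m), its neighbour on the far wall (4.0 m).
p1 = backproject(320, 240, 0.8)
p2 = backproject(321, 240, 4.0)

print(np.linalg.norm(p2 - p1))  # ~3.2 m apart despite being 1 px apart in the image
```

A standard 3x3 convolution would mix these two pixels as if they were neighbours, even though they lie on different objects metres apart, which is why image-grid convolutions struggle on depth data.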




Embodiment Construction

[0038] In order to further understand the present invention, the preferred embodiments of the present invention are described below in conjunction with examples, but it should be understood that these descriptions are only to further illustrate the features and advantages of the present invention, rather than limiting the claims of the present invention.

[0039] An embodiment of the present invention provides a depth camera positioning method based on a convolutional neural network: the color image and the depth image acquired by the depth camera are fused as input, and the camera pose is calculated from them.

[0040] In this embodiment, a convolutional neural network is first established that fuses the input color-image and depth-image information and outputs the pose at which the images were captured.

[0041] Specifically, the convolutional neural network proposed in this embodiment is composed of a graph fusion generation module and a graph neural network module.
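As a rough illustration of how a graph fusion generation module might jointly encode the two images, the sketch below builds a fused graph whose nodes carry per-pixel RGB and back-projected 3D-position features, and whose edges connect k-nearest neighbours in camera space. This is a minimal sketch under assumed pinhole intrinsics, not the patented implementation:

```python
import numpy as np

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5  # hypothetical intrinsics

def build_fusion_graph(rgb, depth, k=4):
    """Fuse a color image and a depth image into (node features, edge list)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth.ravel()
    # Back-project every pixel to a 3D point in camera space.
    xyz = np.stack([(u.ravel() - cx) * z / fx,
                    (v.ravel() - cy) * z / fy,
                    z], axis=1)
    # Node features: normalized RGB concatenated with 3D position (N, 6).
    feats = np.concatenate([rgb.reshape(-1, 3) / 255.0, xyz], axis=1)
    # Edges: k-nearest neighbours in 3D (brute force; fine for small images).
    d = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]
    edges = np.stack([np.repeat(np.arange(len(z)), k), nbrs.ravel()], axis=1)
    return feats, edges

rgb = np.random.randint(0, 256, (8, 8, 3))
depth = np.full((8, 8), 2.0)  # toy depth: flat wall at 2 m
feats, edges = build_fusion_graph(rgb, depth)
print(feats.shape, edges.shape)  # (64, 6) (256, 2)
```

Connecting nodes by 3D proximity rather than pixel adjacency is what lets a downstream graph network respect true spatial structure across depth discontinuities.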

[0042] In this e...



Abstract

The present invention provides a depth camera positioning method based on a convolutional neural network, comprising the following steps: establishing a convolutional neural network that includes a graph fusion generation module and a graph neural network module, where the graph fusion generation module jointly encodes the color image and the depth image obtained by the depth camera to generate an interconnected fusion graph, and the graph neural network module processes the fusion graph; training the convolutional neural network on an offline sample set, which comprises a batch of images taken by the depth camera in the area to be localized together with the camera pose at which each frame was taken; and, when positioning is required, using the depth camera to capture a set of color and depth images in the area to be localized as input, and outputting through the convolutional neural network the pose of the camera that captured that image data. Through the method of the present invention, the color image and depth image acquired by the depth camera can be fused as input to calculate the camera pose.
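The two-phase workflow in the abstract (offline training on image-pose samples, then online localization from a fresh color/depth pair) can be sketched as follows. `NearestSampleLocalizer` is a hypothetical stand-in for the trained network, used only to show the train/localize interface, not the patented CNN:

```python
import numpy as np

class NearestSampleLocalizer:
    """Toy localizer: memorises the offline sample set and returns the pose
    of the most similar stored image pair. Illustrative only."""

    def train(self, samples):
        # samples: list of (rgb, depth, pose) from the area to be localized.
        self.keys = np.array([np.concatenate([r.ravel(), d.ravel()])
                              for r, d, _ in samples], dtype=float)
        self.poses = [p for _, _, p in samples]

    def localize(self, rgb, depth):
        q = np.concatenate([rgb.ravel(), depth.ravel()]).astype(float)
        i = np.argmin(np.linalg.norm(self.keys - q, axis=1))
        return self.poses[i]

# Offline phase: batch images from the target area, each with its camera pose.
rng = np.random.default_rng(0)
samples = [(rng.integers(0, 256, (4, 4, 3)), rng.random((4, 4)), (x, 0.0, 0.0))
           for x in (0.0, 1.0, 2.0)]
model = NearestSampleLocalizer()
model.train(samples)

# Online phase: a fresh color/depth pair from the same area returns a pose.
rgb, depth, true_pose = samples[1]
print(model.localize(rgb, depth))  # (1.0, 0.0, 0.0)
```

In the patented method, the nearest-neighbour lookup above is replaced by the graph-fusion CNN, which generalizes to viewpoints not present in the offline sample set.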

Description

Technical field

[0001] The invention relates to the technical field of robot vision positioning, and in particular to a depth camera positioning method based on a convolutional neural network.

Background technique

[0002] Positioning refers to determining a mobile robot's position and heading relative to global coordinates in its operating environment, and is the most basic link in mobile robot navigation. At present, depending on the sensors and information used, mainstream positioning technologies include magnetic-stripe positioning, laser positioning, two-dimensional-code positioning, and visual positioning. Among these, visual positioning works much as a human determines his own position by observing the surrounding environment with his eyes: image information is captured by a camera, then processed and computed to obtain the robot's position in space. Due to the low cost of visual cam...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/73, G06N3/04, G06N3/08
CPC: G06T7/73, G06N3/08, G06T2207/10024, G06N3/045
Inventors: 郑振浩, 周玄昊, 刘志鹏
Owner: HANGZHOU LANXIN TECH CO LTD