
Scene depth and camera position and posture solving method based on deep learning

A deep-learning technology for scene depth and camera motion parameters. It addresses shortcomings of existing methods, such as the inability to model temporal information, the lack of temporal learning over image sequences, and the resulting limited position and posture estimation performance.

Active Publication Date: 2019-09-20
EAST CHINA NORMAL UNIV


Problems solved by technology

However, such methods rely mainly on CNNs. Because a CNN cannot model temporal information, these methods do not perform temporal learning over image sequences, which limits their position and posture estimation performance.



Examples


Embodiment

[0042] The present invention is described further below in conjunction with the accompanying drawings. This embodiment is implemented on a PC running the 64-bit Windows 10 operating system, with the following hardware configuration: an Intel Core i7-6700K CPU, 16 GB of memory, and an NVIDIA GeForce GTX 1070 GPU with 8 GB of video memory. The deep learning framework is Keras 2.1.0 with TensorFlow 1.4.0 as the backend, and the program is written in Python.

[0043] A method for solving scene depth and camera position and posture based on deep learning. The method takes as input an RGB image sequence with a resolution of N×N, where N is 224. It comprises the following steps:
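The patent fixes the input resolution at N = 224, so each frame must be resized and the sequence stacked into a single tensor before being fed to the network. The sketch below is an illustrative preprocessing step only (the patent does not specify the resize algorithm); nearest-neighbour sampling is used here to keep the example dependency-free apart from NumPy.

```python
import numpy as np

N = 224  # input resolution stated in the patent

def resize_nn(img: np.ndarray, n: int = N) -> np.ndarray:
    """Nearest-neighbour resize of an H x W x 3 image to n x n.

    Sketch only: the patent does not specify the interpolation method.
    """
    h, w = img.shape[:2]
    rows = np.arange(n) * h // n
    cols = np.arange(n) * w // n
    return img[rows][:, cols]

def make_sequence(frames) -> np.ndarray:
    """Stack a list of RGB frames into a (T, N, N, 3) float32 tensor in [0, 1]."""
    return np.stack([resize_nn(f).astype(np.float32) / 255.0 for f in frames])
```

A sequence of five 480×640 frames, for example, becomes a tensor of shape (5, 224, 224, 3) ready for a recurrent network.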

[0044] Step 1: Dataset Construction

[0045] From the RGB-D SLAM Dataset available at https://vision.in.tum.de/data/datasets/rgbd-dataset, select B image sequences with the same resolution, where B is 48 and each sequence contains C frames with 700≤C≤5000. Each image sample contains RGB three-channel image data, a depth map, the camera position and posture, and...
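The selection rule above (B = 48 sequences, each with a frame count C in [700, 5000]) can be sketched as a simple filter. The function and the sequence-name/length mapping are hypothetical illustrations; the patent does not describe how the 48 sequences are chosen among the eligible ones, so this sketch simply takes the first B in sorted order.

```python
def select_sequences(seq_lengths: dict, b: int = 48,
                     c_min: int = 700, c_max: int = 5000) -> list:
    """Keep sequences whose frame count C satisfies c_min <= C <= c_max,
    then take the first b of them (the patent uses B = 48).

    seq_lengths maps a sequence name to its frame count; the tie-breaking
    by sorted name is an assumption, not part of the patent.
    """
    eligible = [name for name, c in sorted(seq_lengths.items())
                if c_min <= c <= c_max]
    return eligible[:b]
```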



Abstract

The invention discloses a method for solving scene depth and camera position and posture based on deep learning. The method takes an image sequence as input and combines a convolutional neural network with a recurrent neural network to estimate the scene depth and the camera position and posture parameters of two adjacent images. A multi-task learning framework is adopted, and the network loss function is defined using the consistency of the three-dimensional scene geometry reconstructed from adjacent image pairs in the sequence, thereby ensuring the accuracy of the scene depth and camera position and posture estimates.
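The geometric-consistency idea in the abstract can be sketched numerically: points reconstructed from frame 1's predicted depth are transformed by the predicted relative pose (R, t) into frame 2, and their depths are compared with frame 2's predicted depth map. This is an illustrative sketch of such a consistency term, not the patent's actual loss; it assumes known intrinsics K, rounds projections to the nearest pixel, and ignores occlusion handling.

```python
import numpy as np

def backproject(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Lift a depth map (H x W) to camera-frame 3-D points (H*W x 3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x HW
    rays = np.linalg.inv(K) @ pix
    return (rays * depth.reshape(1, -1)).T

def geometric_consistency_loss(depth1, depth2, R, t, K) -> float:
    """Mean depth discrepancy after warping frame-1 points into frame 2.

    Sketch only: nearest-pixel lookup, no occlusion or visibility test.
    """
    pts1 = backproject(depth1, K)
    pts2 = pts1 @ R.T + t                 # points expressed in frame 2
    proj = (K @ pts2.T).T
    z = proj[:, 2]
    uv = np.round(proj[:, :2] / z[:, None]).astype(int)
    h, w = depth2.shape
    valid = (z > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
                    & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    d2 = depth2[uv[valid, 1], uv[valid, 0]]
    return float(np.mean(np.abs(z[valid] - d2)))
```

With an identity pose and identical depth maps the discrepancy is zero, which is the consistency the network's loss rewards.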

Description

Technical field

[0001] The invention relates to the field of computer vision, and in particular to a method for solving scene depth and camera position and posture based on deep learning, which takes an image sequence as input and adopts a recurrent neural network to estimate the scene depth and camera motion parameters of two adjacent images.

Background technique

[0002] Depth estimation technology computes the three-dimensional information corresponding to each pixel from two-dimensional image information. Most research on depth estimation is based on multiple images: following the principle of epipolar geometry, depth is estimated from the disparity generated by camera motion during shooting. For a single image, the disparity information of scene objects cannot be obtained, so only limited cues are available from the characteristics and prior knowledge of the image itself to complete the depth estimation, and it therefore has high technical di...
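The multi-view principle mentioned in the background reduces, in the calibrated stereo case, to the classical relation Z = f·B/d between depth Z, focal length f (in pixels), baseline B, and disparity d. A minimal sketch of that relation (illustrative values only):

```python
def depth_from_disparity(disparity: float, focal_px: float, baseline_m: float) -> float:
    """Classical stereo depth: Z = f * B / d.

    disparity  -- pixel offset of a point between the two views
    focal_px   -- focal length in pixels
    baseline_m -- distance between the two camera centres in metres
    """
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity
```

For example, with a 500 px focal length and a 0.1 m baseline, a 10 px disparity corresponds to a depth of about 5 m; as disparity shrinks toward zero, depth grows without bound, which is why single-image depth (no disparity at all) needs learned priors instead.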

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/80, G06T7/73, G06N3/04
CPC: G06T7/80, G06T7/73, G06N3/045
Inventors: 全红艳, 姚铭炜
Owner: EAST CHINA NORMAL UNIV