Dual-frame depth and motion estimation method based on convolutional neural network

A convolutional neural network, dual-frame technology, applied in the fields of biological neural network models, neural architectures, computing, etc., which solves problems such as a limited application scope and inaccurate depth and camera motion estimation, and achieves the effect of saving memory

Active Publication Date: 2017-05-31
SHENZHEN WEITESHI TECH

AI Technical Summary

Problems solved by technology

[0004] Aiming at the problems of inaccurate depth and camera motion estimation and a limited application range, the purpose of the present invention is to provide a dual-frame depth and motion estimation method based on a convolutional neural network.




Embodiment Construction

[0027] It should be noted that, provided there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present invention will be further described in detail below in conjunction with the drawings and specific embodiments.

[0028] Figure 1 is a system flowchart of the dual-frame depth and motion estimation method based on a convolutional neural network of the present invention. The method mainly includes: image input; bootstrap network processing; iterative processing; image refinement; and obtaining the estimation results.
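The staged control flow described above can be sketched as follows. This is only an illustration of the data flow between the stages: the three network functions below are hypothetical placeholders (the patent does not disclose layer configurations in this passage), so the numerical operations inside them stand in for learned networks.

```python
import numpy as np

def bootstrap_net(img1, img2):
    """Stand-in for the first stage: from an image pair, produce initial
    optical flow, depth, and camera motion (placeholder values, not learned)."""
    h, w = img1.shape[:2]
    flow = np.zeros((h, w, 2))    # per-pixel displacement (dx, dy)
    depth = np.ones((h, w))       # initial depth map
    motion = np.zeros(6)          # 3 rotation + 3 translation parameters
    return flow, depth, motion

def iterative_net(img1, img2, flow, depth, motion):
    """Stand-in for the second stage: refine the current estimates.
    A real network would warp img2 by the current flow and predict updates;
    here a dummy damped update marks where that refinement happens."""
    return flow, depth * 0.99 + 0.01, motion

def refinement_net(depth, scale=4):
    """Stand-in for the third stage: upsample depth to full resolution
    (nearest-neighbour upsampling via a Kronecker product)."""
    return np.kron(depth, np.ones((scale, scale)))

def estimate(img1, img2, n_iters=3):
    flow, depth, motion = bootstrap_net(img1, img2)
    for _ in range(n_iters):                 # iterative processing
        flow, depth, motion = iterative_net(img1, img2, flow, depth, motion)
    return refinement_net(depth), motion     # image refinement -> results

img1 = np.random.rand(16, 16, 3)
img2 = np.random.rand(16, 16, 3)
depth_hr, motion = estimate(img1, img2)
print(depth_hr.shape)   # (64, 64): depth upsampled 4x by the refinement stage
```

The point of the structure is that the iterative stage reuses one network body across repetitions, which is the memory-saving aspect mentioned in the summary.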

[0029] Wherein, in the image input step, indoor scene images with ground-truth depth and camera pose are selected as the scene dataset, covering a variety of different scenes ranging from cartoon-like to realistic; when sampling image pairs from the dataset, pairs with photoconsistency errors are automatically discarded, and the dataset is split such that the same scene does not appear in both the training and test...
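The two data-preparation rules in this paragraph (discard photometrically inconsistent pairs; split by scene, not by pair) can be sketched as follows. The error measure and threshold are assumptions for illustration; the patent does not specify them here.

```python
import numpy as np

def photoconsistency_error(img1, img2_warped):
    """Mean absolute intensity difference between frame 1 and frame 2
    warped into frame 1's view; a large value suggests an unusable pair
    (occlusion, exposure change, poor overlap). Warping is assumed done."""
    return float(np.mean(np.abs(img1 - img2_warped)))

def filter_pairs(pairs, threshold=0.3):
    """Keep only image pairs whose photoconsistency error is below a
    threshold (threshold value is a placeholder assumption)."""
    return [(a, b) for a, b in pairs if photoconsistency_error(a, b) < threshold]

def split_by_scene(scene_names, test_fraction=0.2, seed=0):
    """Split at scene granularity so no scene appears in both sets."""
    rng = np.random.default_rng(seed)
    names = sorted(set(scene_names))
    rng.shuffle(names)
    n_test = max(1, int(len(names) * test_fraction))
    return set(names[n_test:]), set(names[:n_test])   # (train, test)

train, test = split_by_scene([f"scene_{i}" for i in range(10)])
print(train & test)   # set(): disjoint by construction
```

Splitting by scene rather than by image pair is what prevents near-duplicate frames of one scene from leaking between training and test sets.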



Abstract

The invention provides a dual-frame depth and motion estimation method based on a convolutional neural network. The method mainly includes the steps of image input, bootstrap network processing, iterative processing, image refinement, and estimation result obtaining. Depth and camera motion are estimated with convolutional networks in three stages: image pairs are sampled from a scene dataset, and pairs with photoconsistency errors are discarded; the preprocessed image pairs are input into a bootstrap network to compute optical flow, depth, and camera motion; an iterative network is applied repeatedly to improve the existing estimate; and a high-resolution depth map and motion estimate are obtained after refinement by a refinement network. The network is clearly superior to traditional structure from motion, and its results are more accurate and more robust; unlike networks that estimate depth from a single image, this network learns the concept of matching, can exploit motion parallax to handle new types of scenes, and allows motion estimation.
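The abstract's point about motion parallax, which a single-image depth network cannot exploit, can be illustrated with the simplest rectified two-view case: for a purely horizontal camera translation, nearby points shift more between the two frames than distant ones, so depth follows from the per-pixel displacement (disparity). The focal length and baseline below are assumed values for illustration, not parameters from the patent.

```python
f = 500.0   # focal length in pixels (assumed)
B = 0.1     # camera translation between the two frames in metres (assumed)

def depth_from_disparity(disparity_px):
    """Rectified two-view triangulation: depth = f * B / disparity."""
    return f * B / disparity_px

print(depth_from_disparity(10.0))  # 5.0 m: large shift -> near point
print(depth_from_disparity(1.0))   # 50.0 m: small shift -> far point
```

This inverse relationship between displacement and depth is the geometric signal available to a two-frame network but absent from any single image.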

Description

technical field

[0001] The invention relates to the field of computer vision, and in particular to a convolutional neural network-based dual-frame depth and motion estimation method.

Background technique

[0002] With the rapid development of science and technology in the field of deep learning research, structure from motion remains a long-standing task in computer vision. State-of-the-art systems are elaborate pipelines consisting of several sequential processing steps, which have certain inherent limitations. The structure of the scene is usually inferred by dense correspondence search before camera motion estimation begins, so an incorrect camera motion estimate leads to erroneous depth predictions. Furthermore, estimating camera motion from sparse correspondences computed by keypoint detection and descriptor matching is prone to outliers and does not work well in textureless regions, and all structure-from-motion methods fail for small ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/207, G06N3/04
CPC: G06T2207/10016, G06T2207/20081, G06N3/045
Inventor: 夏春秋
Owner: SHENZHEN WEITESHI TECH