Camera attitude estimation method based on deep neural network

A deep-neural-network camera pose technology, applied in the field of camera pose estimation based on deep neural networks, which solves the problem of poor pose-network performance and achieves the effect of improved performance

Active Publication Date: 2019-11-22
TIANJIN UNIV

Problems solved by technology

[0004] Deep-learning-based pose estimation depends heavily on the extracted features: the nature of the feature representation determines the quality of pose estimation. A pose network trained on features that capture only image surface (appearance) information often performs poorly in unfamiliar scenes.


Embodiment Construction

[0028] The present invention will be further described in detail below in conjunction with the accompanying drawings and specific embodiments. The following embodiments are descriptive only, not restrictive, and do not limit the protection scope of the present invention.

[0029] The camera pose estimation method based on the deep neural network of the present invention adopts an unsupervised training method and introduces a joint optical flow and pose training strategy, so that the extracted features encode scene geometry and the accuracy of pose estimation is improved.
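The reconstruct-then-compare idea behind this unsupervised scheme can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the nearest-neighbour warp, the L1 norm, and the toy constant flow field are all simplifications (a trainable network would use differentiable bilinear sampling driven by the estimated depth, pose, and flow).

```python
import numpy as np

def photometric_loss(target, reconstructed):
    """Mean absolute (L1) photometric error between an input frame and its
    reconstruction. The patent builds the network loss from this kind of
    photometric error; the exact norm used here is an assumption."""
    return float(np.abs(np.asarray(target, float) - np.asarray(reconstructed, float)).mean())

def warp_with_flow(image, flow):
    """Reconstruct a frame by sampling a source image at positions displaced
    by a dense flow field (flow[..., 0] is the x displacement, flow[..., 1]
    the y displacement). Nearest-neighbour sampling for brevity."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    return image[src_y, src_x]

# Toy check: if the scene shifts one pixel left between frames, a constant
# flow of (+1, 0) lets the previous frame reconstruct the current one,
# so the photometric error on the valid pixels is zero.
prev = np.arange(16.0).reshape(4, 4)
cur = np.roll(prev, -1, axis=1)
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0
rec = warp_with_flow(prev, flow)
print(photometric_loss(cur[:, :-1], rec[:, :-1]))  # 0.0
```

Minimizing this error over a video sequence is what lets the depth, pose, and flow estimates supervise each other without ground-truth labels.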

[0030] Specific steps are as follows:

[0031] 1) Build a camera pose estimation network. As shown in Figure 1, the model is designed on a stacked convolutional neural network structure, comprising convolutional layers, deconvolutional layers, and fully connected layers;
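The interplay of the convolutional and deconvolutional layers in such a stacked structure can be illustrated with the standard output-size arithmetic. Everything below (the kernel sizes, the 416-pixel input width, the stride-2 stages) is a hypothetical configuration chosen for illustration, not dimensions claimed by the patent:

```python
def conv_out(size, kernel, stride=2, pad=None):
    # Spatial size after a convolution; "same"-style padding by default.
    pad = kernel // 2 if pad is None else pad
    return (size + 2 * pad - kernel) // stride + 1

def deconv_out(size, kernel, stride=2, pad=None, out_pad=1):
    # Spatial size after a transposed convolution (deconvolution). With
    # out_pad=1 this exactly inverts conv_out for even sizes at stride 2.
    pad = kernel // 2 if pad is None else pad
    return (size - 1) * stride - 2 * pad + kernel + out_pad

# Hypothetical stacked encoder: four stride-2 convolutions halve a
# 416-pixel-wide input four times; the decoder's deconvolutions restore
# the original resolution for dense outputs such as depth or flow maps.
size = 416
for k in (7, 5, 3, 3):
    size = conv_out(size, k)
print(size)   # 26
for k in (3, 3, 5, 7):
    size = deconv_out(size, k)
print(size)   # 416
```

The fully connected layers would then regress the low-dimensional pose vector from the encoder's final feature map, while the deconvolutional path produces the dense per-pixel outputs.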

[0032] The pose estimation network of the present invention is mainly composed of three sub-networks, includin...


Abstract

The invention discloses a camera attitude estimation method based on a deep neural network. The method comprises the following steps: 1) constructing a camera attitude estimation network; 2) constructing an unsupervised training scheme, reconstructing corresponding images from the input preceding and succeeding frames by using the estimated depth map, the inter-frame relative pose, and the optical flow, and building the network's loss function from the photometric error between the input image and the reconstructed image; 3) sharing a feature extraction part between the pose estimation module and the optical flow estimation module, enhancing the inter-frame geometric relationships in the features; and 4) inputting a single-viewpoint video to be trained, outputting the corresponding inter-frame relative poses, and reducing the loss function through optimization to train the model until the network converges. The model provided by the invention outputs the camera poses of the corresponding sequence from an input single-viewpoint video sequence; the training process is carried out in an end-to-end unsupervised manner, and pose estimation performance is improved through joint optical flow and pose training.
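Step 2 above (reconstructing one frame from another via the estimated depth and relative pose) is usually expressed with the rigid view-synthesis projection common to unsupervised depth-and-pose methods. The patent text does not spell the relation out, so the formulation below, with camera intrinsics $K$, estimated depth $\hat{D}_t(p_t)$ at target pixel $p_t$, and estimated relative pose $\hat{T}_{t\to s}$, is the standard one rather than a quotation from the claims:

$$ p_s \sim K\,\hat{T}_{t\to s}\,\hat{D}_t(p_t)\,K^{-1}\,p_t $$

The source frame $I_s$ is then sampled at the projected coordinates $p_s$ to reconstruct the target frame, and the photometric difference between this reconstruction and the true $I_t$ provides the training signal.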

Description

Technical field

[0001] The invention belongs to the field of computer vision and relates to a camera pose estimation method, in particular to a camera pose estimation method based on a deep neural network.

Background technique

[0002] Camera pose estimation, as the most important part of Simultaneous Localization and Mapping (SLAM) technology, has attracted extensive attention in computer vision and the robotics community over the past few decades. It has been widely applied, for example in GPS positioning and the inertial navigation systems (INS) of various robots.

[0003] Although traditional pose estimation algorithms such as ORB-SLAM and VINS-mono can achieve quite high accuracy, these algorithms often cannot cope with scene changes, and their performance degrades greatly on images with sparse textures. Convolutional neural networks (CNN) have achieved good results in traditional computer vision tasks such as target dete...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/70
CPC: G06T7/70; G06T2207/30244; G06T2207/20084; G06T2207/20081
Inventors: 侯永宏, 李翔宇, 吴琦, 李岳阳, 郭子慧, 刘艳
Owner: TIANJIN UNIV