
Camera pose estimation method and device

A camera pose estimation technology in the field of pose estimation, which addresses the problem of inaccurate estimation of the camera trajectory and pose and improves positioning accuracy and precision.

Pending Publication Date: 2019-11-26
BEIJING YINGPU TECH CO LTD

AI Technical Summary

Problems solved by technology

[0003] However, most existing deep-learning-based camera positioning methods estimate the camera pose from pixel and depth data alone, which easily leads to inaccurate estimates of the camera trajectory and pose.




Embodiment Construction

[0043] Figure 1 is a flowchart of a camera pose estimation method according to an embodiment of the present application. Referring to Figure 1, the method includes:

[0044] 101: Acquire images and inertial measurement unit (IMU) data from a data set;

[0045] Here, the IMU is a device that measures an object's three-axis attitude angles (or angular rates) and acceleration. An IMU generally contains three single-axis accelerometers and three single-axis gyroscopes: the accelerometers measure the object's acceleration along the three independent axes of the carrier coordinate frame, and the gyroscopes measure the carrier's angular velocity relative to the navigation coordinate frame. From the measured angular velocity and acceleration in three-dimensional space, the object's attitude can be computed.
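As a toy illustration of how attitude follows from gyroscope output (this is not part of the patent; the 100 Hz sampling rate and constant yaw rate below are assumptions for the example), angular velocity can be accumulated sample by sample:

```python
import numpy as np

def integrate_gyro(angles, omega, dt):
    """Small-angle Euler integration: advance [roll, pitch, yaw] by one gyro sample."""
    return angles + omega * dt

# Hypothetical 100 Hz gyro stream with a constant yaw rate of 0.1 rad/s
dt = 0.01
angles = np.zeros(3)                          # [roll, pitch, yaw] in radians
for _ in range(100):                          # one second of samples
    angles = integrate_gyro(angles, np.array([0.0, 0.0, 0.1]), dt)
print(angles[2])  # yaw has advanced by roughly 0.1 rad
```

Real IMU pipelines integrate on quaternions or rotation matrices to avoid gimbal lock and also subtract gyro bias; this sketch only shows the accumulation principle.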

[0046] 102: Use a deep convolutional neural network (CNN) to perform convolution and pooling operations on the image to obtain the feature points of the image and their corresponding descriptors;
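The convolution and pooling operations this step refers to can be sketched in plain NumPy (a didactic single-channel version, not the patent's actual network):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(feat, size=2):
    """size x size max pooling: keep the strongest response in each window."""
    h, w = feat.shape[0] // size, feat.shape[1] // size
    return feat[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# A 3x3 all-ones kernel over a 4x4 all-ones image yields a 2x2 response map
# (each entry sums 9 ones); pooling then keeps the strongest response.
response = conv2d(np.ones((4, 4)), np.ones((3, 3)))
print(max_pool2d(response))  # [[9.]]
```

A real descriptor network stacks many such layers with learned multi-channel kernels; interest points are then selected from peaks in the response maps.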



Abstract

The invention discloses a camera pose estimation method and device belonging to the field of pose estimation. The method comprises the following steps: acquiring an image and IMU data from a data set; performing convolution and pooling operations on the image with a CNN to obtain feature points and corresponding descriptors of the image, and then computing the motion data of the image; processing the IMU data with an LSTM (Long Short-Term Memory) network to align the motion data with the IMU data; and combining the aligned motion data and IMU data and sending the combined data to a fully connected layer of the CNN for feature fusion and pose estimation, obtaining the motion features of the camera. The device comprises an acquisition module, a CNN module, an LSTM module and a combination module. The method solves the camera positioning problem in VSLAM, overcomes the prior art's insufficient ability to process high-frequency IMU data, and improves calculation precision and positioning accuracy by combining high-frequency inertial data with low-frequency images.
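The fusion step in the abstract, concatenating the aligned visual motion feature with the IMU feature and passing the result through a fully connected layer that regresses a pose, can be sketched as follows (the feature dimensions and random weights are illustrative assumptions, not values from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical aligned features: a 256-d visual motion feature from the CNN
# branch and a 128-d IMU feature from the LSTM branch.
visual_feat = rng.standard_normal(256)
imu_feat = rng.standard_normal(128)

# Feature fusion: concatenate, then apply a fully connected layer that
# regresses a 6-DoF pose (3 translation + 3 rotation parameters).
W = rng.standard_normal((6, 256 + 128)) * 0.01
b = np.zeros(6)
fused = np.concatenate([visual_feat, imu_feat])
pose = W @ fused + b
print(pose.shape)  # (6,)
```

In a trained system, W and b would be learned jointly with the CNN and LSTM rather than drawn at random.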

Description

Technical field

[0001] The present application relates to the field of pose estimation, and in particular to a camera pose estimation method and device.

Background technique

[0002] VSLAM (Visual Simultaneous Localization and Mapping) refers to the process of computing one's own position while building a map of the environment from visual-sensor data; it solves the problem of localizing and mapping precisely and quickly while moving through an unknown environment. Current mainstream VSLAM frameworks are mainly affected by real-time constraints, the environment, lighting, and other conditions. The output of a SLAM (Simultaneous Localization and Mapping) system includes the localization of the camera (that is, the camera's motion state) and the environmental map. For image-based camera localization and VSLAM systems, the map is an important component. Th...

Claims


Application Information

IPC(8): G06T7/70, G06T7/246
CPC: G06T2207/20084, G06T2207/20221, G06T7/246, G06T7/70
Inventor 宋旭博
Owner BEIJING YINGPU TECH CO LTD