
Camera pose estimation method

A pose estimation and camera technology, applied in computing, navigation computation tools, and image data processing. It addresses the low design accuracy of robot velocity odometry, with the effects of improving design accuracy, shortening matching time, and improving estimation accuracy.

Publication status: Pending
Publication date: 2021-02-23
Applicant: APPLIED TECH COLLEGE OF SOOCHOW UNIV

AI Technical Summary

Problems solved by technology

[0005] Aiming at the deficiencies of the above-mentioned technologies, the present invention provides a camera pose estimation method that estimates the pose of the robot body by fusing a depth camera and an inertial measurement unit for data collection and matching. It solves the problem of low design accuracy in robot velocity odometry, thereby improving the design accuracy of the robot velocity odometer.



Examples


Embodiment 1

[0057] As shown in Figure 1-1, the present invention provides a camera pose estimation method for use in the technical field of robot navigation and positioning. The camera pose estimation method includes the following steps:

[0058] S10. Data collection: the depth camera collects an image frame data stream and obtains image data frames, each including an RGB image and a depth image at a first resolution;

[0059] S20. Data processing: extract ORB features from the image data frames to obtain FAST corner points with direction vectors and BRIEF descriptors consisting of multi-dimensional binary vectors; build an acceleration and gyroscope angular velocity measurement model, and obtain the direction vector of the inertial measurement unit (IMU) data;

[0060] S30. Data fusion: retain the FAST corner points whose directions are consistent with the IMU data direction vector, match the BRIEF descriptors of the two images, and if the matching succeeds, obtain ...
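
The patent text truncates here. As a rough illustration of steps S20 and S30, the following minimal sketch uses OpenCV's ORB implementation (oriented FAST plus BRIEF) and a brute-force Hamming matcher; the IMU-based orientation gate, its tol_deg threshold, and the imu_direction_deg input are illustrative assumptions, not the patent's exact fusion rule.

```python
# Minimal sketch of steps S20-S30: ORB extraction, IMU-direction gating,
# and BRIEF descriptor matching. The gating rule below is an assumption.
import cv2

def match_with_imu_gating(img1, img2, imu_direction_deg, tol_deg=20.0):
    """Extract ORB features, keep corners whose orientation is consistent
    with the IMU direction vector, then match binary BRIEF descriptors."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    def gate(kps, des):
        # Keep keypoints whose FAST orientation lies within tol_deg of the
        # IMU-derived direction (difference wrapped to [-180, 180] degrees).
        keep = [i for i, k in enumerate(kps)
                if abs((k.angle - imu_direction_deg + 180.0) % 360.0 - 180.0) < tol_deg]
        return [kps[i] for i in keep], des[keep]

    kp1, des1 = gate(kp1, des1)
    kp2, des2 = gate(kp2, des2)

    # Brute-force Hamming matching suits binary descriptors; cross-checking
    # keeps only mutually best matches, which shortens later verification.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches
```

Successful matches would then be looked up in the depth image to obtain each corner's depth, as step S30 describes.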

Embodiment 2

[0083] On the basis of Embodiment 1, this embodiment mainly describes the acquisition of FAST corner points. As shown in Figure 3-1, obtaining a FAST corner point with a direction vector includes the following steps:

[0084] S21. First downsampling: the image data frame at the first resolution undergoes a downsampling process to obtain an image data frame at a second resolution;

[0085] S22. Extracting FAST corner points: by comparing pixel brightness, extract FAST corner points from the image data frame at the second resolution and add them to the feature point array;

[0086] S23. Obtaining the direction vector of the FAST corner point: determine whether the acquired FAST corner point is in the feature point array; if so, retain the feature value of the FAST corner point and rotate the FAST corner point to obtain its direction vector; if not, delete the feature value of the FAST corner point;

[0087] S24. Second downsampling: carry out a second downsampling...
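
Steps S21 and S24 describe an image pyramid, and S23's rotation step resembles the intensity-centroid orientation used by oriented FAST. The following is a hedged sketch: the truncated text does not confirm the exact rotation method, and the file path and patch radius below are placeholders.

```python
# Sketch of the two downsampling steps (S21, S24) and a corner direction
# vector computed by the intensity-centroid method; treating S23's
# "rotate the FAST corner point" as this centroid rule is an assumption.
import cv2
import numpy as np

def corner_orientation(gray, x, y, radius=15):
    """Angle of the intensity centroid of a circular patch around (x, y)."""
    h, w = gray.shape
    m10 = m01 = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx * dx + dy * dy > radius * radius:
                continue  # restrict the sum to the circular patch
            px, py = x + dx, y + dy
            if 0 <= px < w and 0 <= py < h:
                m10 += dx * float(gray[py, px])
                m01 += dy * float(gray[py, px])
    return np.arctan2(m01, m10)  # direction vector angle in radians

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # first resolution (placeholder path)
second = cv2.pyrDown(gray)    # S21: first downsampling -> second resolution
third = cv2.pyrDown(second)   # S24: second downsampling
```

Detecting corners at more than one pyramid level gives the features a degree of scale invariance, which is why ORB-style pipelines repeat FAST extraction after each downsampling.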

Embodiment 3

[0090] On the basis of Embodiment 2, this embodiment mainly provides an example of extracting FAST corner points in step S22. As shown in Figure 3-2, the extraction of FAST corner points includes the following steps:

[0091] From the image data frame at the second resolution, select a target pixel with brightness Ai, and set a brightness threshold T;

[0092] Taking the target pixel as the center, take several pixels on a circle of fixed radius around it and number them clockwise;

[0093] Detect the brightness of the numbered pixels to determine whether the target pixel is a FAST corner point: if the brightness of 3 or more of these pixels is simultaneously greater than Ai+T or simultaneously less than Ai-T, and the brightness of 12 consecutive pixels is simultaneously greater than Ai+T or simultaneously less than Ai-T, then the target pixel is determined to be a FAST corner point and added to the feature point array;

[0094] If it is not satisfied that the brightness of three ...
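
This is the classic FAST segment test. A minimal sketch follows, assuming the standard 16-pixel Bresenham circle of radius 3 (the patent only says "several pixels") and a target pixel away from the image border:

```python
# Sketch of the segment test in S22: the target pixel (brightness Ai) is a
# FAST corner when 12 contiguous circle pixels are all brighter than Ai+T
# or all darker than Ai-T, after a quick 3-of-4 compass-point rejection.
# The 16-point radius-3 circle is the usual FAST layout, assumed here.

# Clockwise (dx, dy) offsets of the 16 circle pixels.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(gray, x, y, T=20, n_contig=12):
    Ai = int(gray[y, x])
    ring = [int(gray[y + dy, x + dx]) for dx, dy in CIRCLE]

    # Quick rejection: at least 3 of the 4 compass points (indices 0, 4,
    # 8, 12) must all be brighter than Ai+T or all darker than Ai-T.
    compass = [ring[i] for i in (0, 4, 8, 12)]
    if (sum(p > Ai + T for p in compass) < 3
            and sum(p < Ai - T for p in compass) < 3):
        return False

    # Full test: n_contig consecutive circle pixels (with wrap-around) all
    # brighter than Ai+T (sign=+1) or all darker than Ai-T (sign=-1).
    for sign in (1, -1):
        run = 0
        for p in ring + ring:  # doubled list handles circular runs
            run = run + 1 if sign * (p - Ai) > T else 0
            if run >= n_contig:
                return True
    return False
```

The function only needs 2-D indexing on a grayscale array (e.g. a NumPy image); the caller should skip pixels closer than 3 pixels to the border.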



Abstract

The invention discloses a camera pose estimation method which comprises the following steps: acquiring an image frame data stream, and obtaining an image data frame containing an RGB image and a depth image at a first resolution; extracting ORB features, and obtaining FAST corner points with direction vectors and BRIEF descriptors consisting of multi-dimensional binary vectors; building an acceleration and gyroscope angular velocity measurement model, and obtaining a direction vector of the inertial measurement unit data; retaining the FAST corner points consistent with the direction of the inertial measurement unit data direction vector, matching the BRIEF descriptors, and after matching succeeds, obtaining depth data of the FAST corner points from the depth camera; expressing the pose with a Lie algebra, obtaining an objective function, differentiating the objective function with a Lie algebra disturbance model, and finding a local minimum as the pose estimation value. Data acquisition and matching are carried out through the fusion and cooperation of the depth camera and inertial measurement to estimate the pose of the robot body, improving the design precision.
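
The final optimization step corresponds to the standard reprojection-error formulation on SE(3). The patent's exact objective is not reproduced in this summary, so the following is the common textbook form, stated as an assumption:

```latex
% A typical reprojection-error objective over a pose xi in se(3); the
% patent's exact objective is not shown here, so this form is assumed.
\min_{\xi \in \mathfrak{se}(3)} \;
J(\xi) = \frac{1}{2} \sum_{i=1}^{n}
\left\| u_i - \frac{1}{z_i} K \exp\!\left(\xi^{\wedge}\right) P_i \right\|_2^2
```

Here P_i is a matched FAST corner's 3-D point (recovered from the depth image), u_i its pixel location in the other frame, K the camera intrinsics, and z_i the depth of the transformed point. The disturbance (left-perturbation) model differentiates J by applying a small increment on the left of exp(xi^), which for q = exp(xi^)P_i yields the simple 3x6 Jacobian block [I, -q^]; iterating Gauss-Newton or Levenberg-Marquardt steps with this Jacobian finds the local minimum taken as the pose estimate.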

Description

Technical field

[0001] The present invention relates to the technical field of robot navigation and positioning, and more particularly to a camera pose estimation method.

Background technique

[0002] The navigation and positioning of robots are generally accomplished with separate or fused technologies such as radar, vision, and inertial navigation. Vision is further divided into monocular, binocular, depth-camera, and other image acquisition methods.

[0003] In the inertial relative pose measurement method, inertial sensors such as accelerometers and gyroscopes are fixed directly on the measured object and the motion reference system; various inertial sensors can also be combined into an inertial measurement unit before installation. The visual pose measurement method usually extracts feature point information from the captured image and then calculates the pose information of the target object relative to the reference coordinate syst...


Application Information

IPC(8): G01C21/00; G01C21/16; G01C21/20; G06T7/73
CPC: G01C21/005; G01C21/165; G01C21/20; G06T7/75
Inventors: 史梦安, 程巍, 任艳, 任勇, 徐云龙, 胥薇, 杨艳红, 陈阳子, 陈年飞, 杨敬晶, 马壮
Owner: APPLIED TECH COLLEGE OF SOOCHOW UNIV