
Reconstruction method based on point-line feature rapid fusion

A feature-fusion and point-feature technology, applied to reconstruction based on the rapid fusion of point and line features. It addresses the problems of unreliable results and sparse 3D point clouds, and achieves fast extraction and matching, small reprojection error, and accurate feature extraction.

Active Publication Date: 2019-07-19
XI AN JIAOTONG UNIV

AI Technical Summary

Problems solved by technology

In particular, for 3D reconstruction of images lacking texture information, only a small number of point features are obtained, the resulting 3D point cloud is sparse, and the result is therefore less reliable.



Examples


Detailed Description of the Embodiments

[0021] Referring to Figure 1: the dashed-line portion of Figure 1 is the rapid point-line feature fusion process of the present invention. As can be seen from Figure 1, the point features of the sequence images are first matched and their descriptors normalized; the line segment features are then matched from coarse to fine; finally, the point and line features are fused and used in the 3D reconstruction.
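As an illustration of the point-feature stage, the following is a minimal sketch assuming SIFT keypoints whose descriptors are L2-normalized before brute-force matching with a ratio test. The function name, the ratio value, and the use of OpenCV's Python bindings are illustrative assumptions, not the patent's reference implementation (note that in OpenCV 4.0 SIFT lived in the contrib xfeatures2d module; cv2.SIFT_create is available in the main module from 4.4 onward).

```python
import cv2
import numpy as np

def match_point_features(img1, img2, ratio=0.75):
    """Hypothetical helper: SIFT extraction, descriptor normalization, ratio-test matching."""
    sift = cv2.SIFT_create()  # OpenCV >= 4.4; use xfeatures2d.SIFT_create on older contrib builds
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Normalize descriptors so matching distances are comparable across images,
    # in line with the "descriptors are normalized" step described above.
    des1 = des1 / np.linalg.norm(des1, axis=1, keepdims=True)
    des2 = des2 / np.linalg.norm(des2, axis=1, keepdims=True)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = []
    for pair in knn:
        # Lowe ratio test: keep a match only if it is clearly better than the runner-up.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts1, pts2
```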

[0022] The present invention uses the fountain-P11 multi-view dataset from the Swiss Federal Institute of Technology to carry out sparse 3D reconstruction. The experimental environment is a VMware Ubuntu 16.04 virtual machine with a 4-core processor and 8 GB of memory; no GPU acceleration is used. The 3D reconstruction program is written with OpenCV 4.0. The experimental process is explained further below.

[0023] Step 1: Image preprocessing. The dataset consists of 11 time-series images of 3072×2048 pixels. Use the E...
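A minimal preprocessing sketch along the lines of the abstract (downsampling to reduce reconstruction cost, and forming an intrinsic matrix from a known focal length) might look as follows. The scale factor and the focal-length value are placeholders, since fountain-P11 ships its own calibration files; the helper name is hypothetical.

```python
import cv2
import numpy as np

def preprocess(path, scale=0.25, focal_px=2760.0):
    """Hypothetical helper: downsample a source image and scale the intrinsics with it."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)            # 3072x2048 source image
    small = cv2.resize(img, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)         # downsampling step
    h, w = small.shape
    # focal_px is a placeholder; read the real value from the dataset's calibration files.
    # Principal point is assumed at the image center.
    K = np.array([[focal_px * scale, 0.0, w / 2.0],
                  [0.0, focal_px * scale, h / 2.0],
                  [0.0, 0.0, 1.0]], dtype=np.float64)
    return small, K
```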



Abstract

The invention discloses a reconstruction method based on rapid point-line feature fusion. The method comprises the following steps: extracting frames from a video and performing preprocessing such as focal length extraction and downsampling to reduce the reconstruction complexity; carrying out point feature extraction and matching using the scale-invariant feature transform (SIFT); matching the line features rapidly from coarse to fine, using the line segment detector (LSD) to extract and describe the line features, and obtaining image line segment matching pairs through four steps applied to the line segment descriptors: brute-force matching, motion estimation, Hamming-distance threshold judgment, and length screening; carrying out point-line feature fusion by converting the final line segment matching pairs into pixel points, comparing these pixel points with the pixel coordinates of the existing point features, deleting the repeated points, and then fusing the line segment pixel points with the point features; and calculating the camera extrinsic pose and the 3D point cloud by computing the essential matrix from the final image point-line matching pairs, solving the camera extrinsic pose, solving the 3D point cloud by triangulation, and optimizing the result with bundle adjustment.
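To make the final stage of the abstract concrete, here is a hedged sketch of estimating the essential matrix from the fused correspondences, recovering the relative camera pose, and triangulating a sparse point cloud with OpenCV. The function name and RANSAC threshold are assumptions, and the bundle-adjustment step is indicated only by a comment, since the patent does not name a particular solver.

```python
import cv2
import numpy as np

def reconstruct_pair(pts1, pts2, K):
    """Hypothetical helper: pts1/pts2 are Nx2 fused point-line correspondences, K the intrinsics."""
    # Essential matrix from the fused correspondences (RANSAC for robustness to outliers).
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

    # Projection matrices: first camera at the origin, second at the recovered relative pose.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])

    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)   # homogeneous 4xN
    pts3d = (pts4d[:3] / pts4d[3]).T                         # Euclidean Nx3 sparse point cloud
    # A bundle-adjustment step (e.g. with scipy.optimize.least_squares) would refine
    # R, t and pts3d by minimizing reprojection error, as the abstract describes.
    return R, t, pts3d
```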

Description

Technical Field
[0001] The invention belongs to the field of three-dimensional reconstruction in computer vision, and in particular relates to a reconstruction method based on the rapid fusion of point-line features.
Background
[0002] Sparse 3D reconstruction from images, also known as structure from motion, traditionally computes the camera pose and a sparse 3D point cloud from point features. Current research is mainly divided into two categories.
[0003] One is incremental reconstruction. Image point features are first extracted, usually with the scale-invariant feature transform; by extracting and matching point features between image pairs, the camera's extrinsic pose and the 3D point cloud are solved, and the results are optimized by least squares.
[0004] The other is global reconstruction. It also begins by extracting image point features, then extracts and matches image pair point featur...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06T17/00G06T7/33G06K9/46
CPCG06T17/00G06T7/33G06T2207/10028G06V10/462
Inventor 张元林赵君
Owner XI AN JIAOTONG UNIV