Multi-view depth acquisition method

An acquisition method using multi-view technology, applied in the field of computer vision; it addresses the dependence of traditional depth acquisition on feature matching, speeds up the generation of depth maps, and achieves the effects of improved recognition ability, fast speed and good robustness.

Pending Publication Date: 2022-01-21
SHENYANG POLYTECHNIC UNIV

AI Technical Summary

Problems solved by technology

[0010] The method is proposed in order to obtain higher-precision depth information, to overcome the dependence on feature matching caused by the limited ability of traditional depth acquisition methods to extract feature points, and at the same time to speed up the generation of depth maps.

Detailed Description of the Embodiments

[0033] The present invention is further described below in conjunction with the accompanying drawings:

[0034] As shown in Figure 1, the multi-view depth acquisition method specifically includes the following steps:

[0035] Image input: multiple images are input, acquired by the same camera at multiple positions. The images acquired at these positions are divided into a reference image and several target images; all of them are 128×160-pixel RGB three-channel images. The position at which the reference image is acquired is called the reference view, and a position at which a target image is acquired is called a target view. In this method, image sequences at further scales can be obtained by downsampling the image sequence, with the length and width halved at each downsampling step. If downsampling is performed n times, the sequence numbers of the final image sequence are arranged in reverse order as L = n, n−1, ...
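As an illustration of this downsampling step, the sketch below builds such a multi-scale image sequence by repeatedly halving the resolution. It is a minimal example assuming a PyTorch implementation; the function name, tensor layout and coarsest-first ordering of the returned list are illustrative assumptions rather than the patent's actual procedure.

import torch
import torch.nn.functional as F

def build_image_pyramid(images, n):
    # images: a (V, 3, H, W) tensor holding the reference image and the
    #         target images, e.g. V views of 128x160 RGB pixels (assumed layout).
    # n:      number of 1/2 downsampling steps.
    levels = [images]  # finest scale: the original resolution
    current = images
    for _ in range(n):
        # Halve the height and width with bilinear interpolation.
        current = F.interpolate(current, scale_factor=0.5,
                                mode="bilinear", align_corners=False)
        levels.append(current)
    # Return coarsest-first, so that the last entry is the original
    # resolution (the ordering here is an assumption).
    return levels[::-1]

# Example: one reference view plus two target views at 128x160 pixels,
# downsampled twice, giving scales of 32x40, 64x80 and 128x160.
views = torch.rand(3, 3, 128, 160)
pyramid = build_image_pyramid(views, n=2)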


Abstract

The invention discloses a multi-view depth acquisition method, relating to the field of computer vision and the technical field of deep learning. The depth map is solved by machine learning, which gives good robustness to shooting-angle problems such as wide baselines, and to complex texture and shadow problems such as rough areas, weak-texture areas and occlusion. A CBAM attention mechanism is introduced into the feature extraction module, and the features obtained by each convolution are refined along both the channel dimension and the spatial dimension. Skip connections in the feature-extraction UNet structure ensure that high-level information is not overwritten while low-level information is still obtained. The feature-extraction UNet and the CBAM attention mechanism fully consider the geometric mapping relations between different views, improving the recognition capability of the feature extraction module for features from different views. In the cost regularization part, 3D convolution is combined with a bidirectional long short-term memory (LSTM) network, and the three-dimensional variance features are regularized along both the depth dimension and the channel dimension, which improves the network's processing and gives a fast generation speed.
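As a concrete reference for the attention step described above, the following is a minimal sketch of a CBAM-style block, assuming a PyTorch implementation; the class name, reduction ratio and kernel size follow the commonly published CBAM design and are not taken from the patent.

import torch
import torch.nn as nn

class CBAM(nn.Module):
    # Re-weights a feature map first along the channel dimension and then
    # along the spatial dimension, following the standard CBAM design
    # (illustrative sketch, not the patent's implementation).
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # Channel attention: a shared MLP applied to globally average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: a convolution over the channel-wise average and max maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # Channel attention.
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention.
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map = torch.amax(x, dim=1, keepdim=True)
        attn = torch.sigmoid(self.spatial(torch.cat([avg_map, max_map], dim=1)))
        return x * attn

# Example: refine the features produced by one convolution of the extraction network.
feat = torch.rand(1, 32, 128, 160)
refined = CBAM(32)(feat)  # same shape, attention-refined features

In a UNet-style extractor such a block would typically be placed after each convolutional stage, with the skip connections then carrying the refined high-level features alongside the low-level features to the decoder.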

Description

Technical field:

[0001] The present invention relates to the field of computer vision and the technical field of deep learning, and in particular to a multi-view depth acquisition method.

Background technique:

[0002] Three-dimensional reconstruction refers to using images of three-dimensional objects in the real world to build a three-dimensional model stored in a computer; it is a key technology for storing the three-dimensional geometric structure of the objective world in a computer. 3D reconstruction has applications in 3D modeling and mapping, robotics, medical imaging, surveillance, tracking and navigation. At the same time, depth acquisition has broad application prospects in industries such as reverse engineering, games and entertainment.

[0003] 3D reconstruction using computer vision is a complete process that includes camera calibration, feature matching and reconstruction. The purpose of 3D reconstruction is to restore the complete structural ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T3/40, G06T7/73, G06N3/04, G06N3/08, G06N20/00
CPC: G06T3/4007, G06T3/4046, G06T7/73, G06N3/084, G06N20/00, G06T2207/10016, G06T2207/20081, G06T2207/20084, G06T2207/30244, G06N3/044, G06N3/045
Inventor: 魏东, 于璟玮, 何雪, 刘涵
Owner: SHENYANG POLYTECHNIC UNIV