
Monocular visual focusing stack acquisition and scene reconstruction method

A scene reconstruction and monocular vision technology, applied to television, color television, and 3D modeling, which can solve problems such as low accuracy, high hardware requirements, and complex algorithms, achieving wide applicability, few constraints, and a small device size.

Active Publication Date: 2018-05-18
BEIJING INFORMATION SCI & TECH UNIV

AI Technical Summary

Problems solved by technology

In depth camera systems, depth acquisition mostly relies on the time-of-flight principle, but because it is strongly affected by environmental factors, its accuracy is low.
There are also depth sensing devices based on binocular stereo matching, but these have high hardware requirements, complex algorithms, and long computation times, making them unsuitable for wide use.
Depth estimation software, in turn, often suffers from low accuracy when processing data.




Embodiment Construction

[0026] In the drawings, the same or similar reference numerals indicate the same or similar elements, or elements having the same or similar functions. Embodiments of the present invention are described in detail below in conjunction with the drawings.

[0027] In the description of the present invention, the orientations or positional relationships indicated by the terms "center", "longitudinal", "lateral", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc. are based on the orientations or positional relationships shown in the drawings. They are used only for convenience in describing the present invention and to simplify the description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be understood as limiting the protection scope of the present invention.

[0028] As shown in figure 1 ...


Abstract

The invention discloses a monocular visual focusing stack acquisition and scene reconstruction method. The method comprises the following steps: controlling rotation of a prime lens through an electrically controlled rotator to acquire focusing stack data; during the rotation of the prime lens, keeping the detector fixed while synchronously translating the prime lens along the optical axis of the camera; establishing a correspondence between the rotation angle of the electrically controlled rotator and the imaging surface depth according to the position adjustment of the prime lens; establishing a correspondence between the rotation angle and the focused object surface depth by combining the object-image relationship of lens imaging with the correspondence between the rotation angle and the imaging surface depth; and computing the depth of each object point by maximizing a focus measure function according to the correspondence between the rotation angle and the focused object surface depth, then outputting a scene depth map and an all-in-focus image so as to reconstruct the three-dimensional scene. The method can satisfy the requirements for three-dimensional scene reconstruction, image depth information, and full focus within the camera's field of view (FOV), and can generate depth images and all-in-focus images.
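The final step of the abstract, finding for each pixel the frame in the focusing stack where a focus measure is maximal and reading depth off the angle-to-depth correspondence, can be sketched as follows. This is a minimal numpy illustration under stated assumptions: the squared-Laplacian focus measure and the function names are choices of this sketch, not the patent's specified focus measure function.

```python
import numpy as np

def laplacian_sq(img):
    """Squared discrete Laplacian: a simple per-pixel focus measure."""
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] +
           p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * p[1:-1, 1:-1])
    return lap ** 2

def depth_from_focus_stack(stack, object_depths):
    """Per-pixel depth map and all-in-focus image from a focusing stack.

    stack:         (N, H, W) grayscale frames; frame i was focused at
                   object_depths[i] (obtained from the rotator angle via
                   the angle-to-object-surface-depth correspondence).
    object_depths: length-N sequence of focused object-plane depths.
    """
    focus = np.stack([laplacian_sq(f) for f in stack])  # (N, H, W)
    best = np.argmax(focus, axis=0)                     # sharpest frame index
    depth_map = np.asarray(object_depths, dtype=np.float64)[best]
    rows, cols = np.mgrid[0:best.shape[0], 0:best.shape[1]]
    all_in_focus = stack[best, rows, cols]              # pick sharpest pixel
    return depth_map, all_in_focus
```

In practice the focus measure is usually smoothed over a local window before the argmax, so that noise in a single pixel does not pick the wrong frame.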

Description

Technical field [0001] The invention relates to the field of computer vision and digital image processing, and provides a monocular visual focusing stack acquisition and scene reconstruction method. Background technique [0002] Image depth refers to the distance between the real scene and the camera imaging plane. Depth information is widely used in applications such as image recognition and processing, intelligent robots, and virtual reality technologies. [0003] At present, there are two main ways to obtain image depth information: depth camera shooting and estimation by depth estimation software. In depth camera systems, depth acquisition is mostly based on the time-of-flight principle, but its accuracy is low because it is strongly affected by environmental factors. In addition, there are depth sensing devices based on binocular stereo matching, which have high hardware requirements, complicated algorithms, and long calculation times, which...
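The "object-image relationship of lens imaging" used by the method to map rotator angle to focused object depth is the Gaussian thin-lens equation, 1/f = 1/u + 1/v: given the image distance v set by translating the lens along the optical axis, the focused object-plane depth u follows directly. A minimal sketch, assuming millimetre units and a function name not taken from the patent:

```python
def focused_object_depth(f_mm, v_mm):
    """Focused object-plane depth u from the thin-lens equation 1/f = 1/u + 1/v.

    f_mm: focal length of the prime lens.
    v_mm: lens-to-image-plane distance, set by translating the lens along
          the optical axis; must exceed f_mm for a real focused image.
    """
    if v_mm <= f_mm:
        raise ValueError("image distance must exceed the focal length")
    # Solve 1/u = 1/f - 1/v for the object distance u.
    return f_mm * v_mm / (v_mm - f_mm)

# Example: a 50 mm lens translated so the image plane sits at 52 mm
# focuses on objects 1300 mm away.
```

Composing this with the calibrated angle-to-image-distance correspondence yields the angle-to-object-depth table that the depth computation step relies on.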

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T17/00, H04N13/207, H04N13/271, H04N5/232
CPC: G06T17/00, H04N23/60
Inventors: 刘畅, 赵旭, 亢新凯, 邱钧
Owner BEIJING INFORMATION SCI & TECH UNIV