
Method for acquiring dynamic scene depth based on fringe structure light

A dynamic scene depth acquisition method in the field of image processing that solves problems such as inaccurate stripe-boundary localization, reducing computational complexity and improving spatial resolution.

Inactive Publication Date: 2013-11-20
XIDIAN UNIV


Problems solved by technology

[0008] The purpose of the present invention is to address the deficiencies of the above-mentioned prior art by proposing a dynamic scene depth acquisition method based on stripe structured light, so as to avoid color decoding errors and inaccurate stripe-boundary localization without increasing equipment or computational complexity, to improve spatial resolution, and to obtain high-precision dynamic scene depth values.




Embodiment Construction

[0034] The present invention improves on existing spatially coded structured-light algorithms for dynamic scene depth acquisition: without increasing equipment complexity, it improves the spatial resolution of the acquired depth and increases its accuracy.

[0035] Referring to FIG. 1, the present invention is a dynamic scene depth acquisition method based on striped structured light, whose steps are as follows:

[0036] Step 1: design a black and white stripe template P with stripes of different widths, encoded by a binary (k = 2) De Bruijn sequence of order n = 3.

[0037] (1a) Set a pair of black and white stripes as the basic unit, with the black stripe on the left and the white stripe on the right. The sum of the widths of the black and white stripes is a constant L, and the ratio of the white-stripe width to L is defined as the duty cycle. When the duty cycle of the white stripe is 2/6, the code of the pair of black and white stripes ...
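The template design above relies on the window property of De Bruijn sequences: in a binary sequence of order 3, every run of three consecutive symbols is unique, so any three neighboring stripe pairs identify their position in the pattern. The following is a minimal sketch of the standard Lyndon-word (FKM) construction of B(2, 3), with a hypothetical symbol-to-duty-cycle mapping; the patent's actual mapping is truncated above, so the `duty` table here is illustrative only:

```python
def de_bruijn(k: int, n: int) -> list:
    """Generate a De Bruijn sequence B(k, n) of length k**n via the
    classic Lyndon-word concatenation (FKM) algorithm."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

# Binary sequence of order 3: 8 symbols, every cyclic 3-window unique.
seq = de_bruijn(2, 3)          # [0, 0, 0, 1, 0, 1, 1, 1]

# Hypothetical mapping from code symbol to white-stripe duty cycle
# (the patent only shows the 2/6 case before the text is cut off).
duty = {0: 2 / 6, 1: 4 / 6}
stripe_duty_cycles = [duty[s] for s in seq]
```

Because each window of three symbols occurs exactly once per period, decoding three consecutive stripe pairs in the camera image uniquely determines their index in the template.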



Abstract

The invention discloses a method for acquiring dynamic scene depth based on fringe structured light, which mainly solves the prior art's problems of low depth calculation precision and low depth spatial resolution. The implementation comprises the steps of: designing a black and white fringe template P coded by a De Bruijn sequence, projecting the template onto a three-dimensional scene with a projector, and recording the deformed fringe image U with a camera; determining the sequence numbers of the black and white fringes in the deformed fringe image U by using the geometric relationship between the projector and the camera together with the De Bruijn sequence; solving the phase difference between the deformed fringe image U and the template P, and using the fringe sequence numbers and the phase difference to find, for each pixel in U, the coordinates of its matching point in P; and solving the depth of each pixel in U from the matching-point coordinates and the line-plane intersection geometry. The method has low computational complexity, high depth spatial resolution, and high depth precision, and can be applied to precise three-dimensional reconstruction of dynamic scenes.
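The final step of the pipeline, recovering depth by intersecting each camera pixel's viewing ray with the light plane of its matched projector stripe, can be sketched as follows. The intrinsic matrix and plane parameters are assumed to come from prior projector-camera calibration, and the function and variable names are illustrative, not taken from the patent:

```python
import numpy as np

def depth_from_plane(pixel, K_cam, plane):
    """Depth of a camera pixel as the intersection of its viewing ray
    with a projector stripe plane n . X = d (camera coordinates).

    pixel : (u, v) image coordinates
    K_cam : 3x3 camera intrinsic matrix
    plane : (n, d) with n a 3-vector normal, d a scalar offset
    """
    u, v = pixel
    # Back-project the pixel to a ray direction X = t * ray, t > 0.
    ray = np.linalg.inv(K_cam) @ np.array([u, v, 1.0])
    n, d = plane
    # Solve n . (t * ray) = d for the ray parameter t.
    t = d / (n @ ray)
    X = t * ray
    return X[2]  # depth is the Z component in camera coordinates

# Toy example: principal-point pixel, stripe plane parallel to the
# image plane at Z = 2 (all numbers hypothetical).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
z = depth_from_plane((320.0, 240.0), K, (np.array([0.0, 0.0, 1.0]), 2.0))
```

In practice each stripe index selects a different calibrated plane, so the same intersection is evaluated once per pixel with the plane of that pixel's matched stripe.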

Description

Technical Field

[0001] The invention belongs to the technical field of image processing, relates to the acquisition of dynamic scene depth, and can be used for three-dimensional reconstruction or target recognition.

Background Technique

[0002] With the rapid development of information technology, measuring the depth of three-dimensional objects has become essential in many application fields, such as industrial automation, mobile robotics, human-computer interaction, and surgery. Current three-dimensional measurement methods fall into two categories: passive and active.

[0003] Passive measurement methods include the binocular stereo vision method, the defocus method, and the shadow method. The binocular stereo vision method computes the parallax between the images captured by two cameras and then recovers depth by triangulation. This method places high demands on camera synchronization, and is ruled out because the ...
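For rectified binocular stereo as described above, the triangulation reduces to the classic relation Z = f * B / d (focal length in pixels times baseline, divided by disparity). A one-line sketch with hypothetical parameter names:

```python
def stereo_depth(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from rectified binocular stereo: Z = f * B / d.

    f_px         : focal length in pixels
    baseline_m   : distance between the two camera centers, in meters
    disparity_px : horizontal pixel offset of the point between views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# e.g. f = 700 px, baseline 0.1 m, disparity 35 px -> depth 2 m
z = stereo_depth(700.0, 0.1, 35.0)
```

The inverse dependence on disparity is why passive stereo loses precision at long range: a one-pixel matching error causes a depth error that grows quadratically with distance.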

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/00; G06T9/00; G06T17/00
Inventors: 石光明, 李甫, 李芹, 齐飞, 石悦鑫, 高山
Owner: XIDIAN UNIV