4462 results about "Structured light" patented technology

Structured light is the process of projecting a known pattern (often grids or horizontal bars) onto a scene. The way these patterns deform when striking surfaces allows vision systems to calculate the depth and surface information of the objects in the scene, as used in structured light 3D scanners.
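
To make the definition concrete, the following is a minimal Python sketch (not taken from any patent below) of the basic triangulation step: the horizontal shift of a known projected stripe, as seen by a camera offset from the projector, is converted into depth. The baseline and focal length are illustrative assumptions.

```python
# Minimal sketch (illustrative constants, not from any patent listed here):
# a known vertical-stripe pattern is projected, and the horizontal shift
# (disparity) of each stripe seen by an offset camera is converted to depth
# by triangulation, exactly as in a simple stereo setup.
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray,
                         baseline_m: float = 0.10,   # assumed projector-camera baseline
                         focal_px: float = 800.0     # assumed focal length in pixels
                         ) -> np.ndarray:
    """Triangulate depth wherever a projected stripe was detected.

    disparity_px: horizontal offset (pixels) between the projected and the
    observed stripe position; 0 means "no stripe detected" (depth = NaN).
    """
    disparity = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity, np.nan)
    valid = disparity > 0
    depth[valid] = baseline_m * focal_px / disparity[valid]
    return depth

# Stripes that land on nearer surfaces shift more, so larger disparities
# map to smaller depths.
print(depth_from_disparity(np.array([0.0, 20.0, 40.0])))   # [nan 4.  2. ]
```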

System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

Inactive · US7440590B1 · More detailed and large depth mapping · Limited bandwidth · Projectors · Cathode-ray tube indicators · Interaction interface · Telecollaboration
A technique, associated system and program code, for retrieving depth information about at least one surface of an object. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns at the object; capturing an image reflected from the surface; and recovering pattern information from the reflected image for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping can be utilized in a host of applications, for example: displaying a 3-D view of the object; a virtual-reality user-interaction interface with a computerized device; recognition and comparison of faces, other animal features, or inanimate objects for security or identification purposes; and 3-D video teleconferencing/telecollaboration.
Owner:UNIV OF KENTUCKY RES FOUND
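
As an illustration of the general composite-and-demodulate idea in the abstract above (a sketch under assumed waveforms, not the patent's actual method), the Python snippet below modulates two toy stripe patterns with distinct sinusoidal carriers, sums them into a single composite image, and recovers each pattern by synchronous demodulation followed by a vertical low-pass filter. The image size, carrier frequencies, and the uniform filter are assumptions made for the example.

```python
# Simplified sketch of a composite of modulated patterns (assumed carriers and
# filtering, not the patent's specific waveforms): each pattern is
# amplitude-modulated by a distinct sinusoidal carrier, the modulated patterns
# are summed into one composite image, and each is recovered by synchronous
# demodulation (multiply by its carrier, then low-pass along the carrier axis).
import numpy as np
from scipy.ndimage import uniform_filter1d

H, W = 64, 256
y = np.arange(H).reshape(-1, 1)                 # carriers vary along rows

def carrier(cycles: float) -> np.ndarray:
    """Sinusoidal carrier with the given number of cycles over the image height."""
    return np.cos(2 * np.pi * cycles * y / H)

# Two toy structured-light patterns (coarse and fine vertical stripes),
# constant along y so the vertical low-pass below does not blur them.
x = np.arange(W)
patterns = [0.5 + 0.5 * np.cos(2 * np.pi * 4 * x / W) * np.ones((H, 1)),
            0.5 + 0.5 * np.cos(2 * np.pi * 16 * x / W) * np.ones((H, 1))]
carrier_cycles = [6.0, 14.0]                    # distinct, mutually uncorrelated

# Modulate each pattern with its own carrier and sum into a single projection.
composite = sum(p * carrier(c) for p, c in zip(patterns, carrier_cycles))

# Demodulate the "captured" composite: multiplying by the matching carrier
# shifts that pattern back to baseband; a vertical moving average removes the
# products of the other carriers.
recovered = [uniform_filter1d(2.0 * composite * carrier(c),
                              size=H // 2, axis=0, mode="wrap")
             for c in carrier_cycles]

for p, r in zip(patterns, recovered):
    print("max reconstruction error:", float(np.abs(p - r).max()))  # ~1e-13
```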

Apparatus and method for determining orientation parameters of an elongate object

An apparatus and method employing principles of stereo vision for determining one or more orientation parameters and especially the second and third Euler angles θ, ψ of an elongate object whose tip is contacting a surface at a contact point. The apparatus has a projector mounted on the elongate object for illuminating the surface with a probe radiation in a known pattern from a first point of view and a detector mounted on the elongate object for detecting a scattered portion of the probe radiation returning from the surface to the elongate object from a second point of view. The orientation parameters are determined from a difference between the projected and detected probe radiation such as the difference between the shape of the feature produced by the projected probe radiation and the shape of the feature detected by the detector. The pattern of probe radiation is chosen to provide information for determination of the one or more orientation parameters and can include asymmetric patterns such as lines, ellipses, rectangles, polygons or the symmetric cases including circles, squares and regular polygons. To produce the patterns the projector can use a scanning arrangement or a structured light optic such as a holographic, diffractive, refractive or reflective element and any combinations thereof. The apparatus is suitable for determining the orientation of a jotting implement such as a pen, pencil or stylus.
Owner:ELECTRONICS SCRIPTING PRODS
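
For intuition only, here is a highly simplified Python sketch of how a projected pattern's deformation can encode tilt: if an idealized thin circular beam is projected along the object's axis, the ellipse it traces on the surface has an axis ratio of cos(tilt), and the major axis points along the tilt direction. This substitutes simple beam geometry for the patent's stereo-vision formulation and ignores the detector's own perspective; the function and its arguments are hypothetical.

```python
# Highly simplified sketch (hypothetical function, idealized geometry): an
# ideal thin circular beam projected along the object's axis intersects the
# writing surface in an ellipse whose minor/major axis ratio equals
# cos(tilt), with the major axis pointing along the tilt direction. The
# patent's stereo-vision recovery of the Euler angles is more general.
import math

def tilt_from_ellipse(major_axis: float, minor_axis: float,
                      major_axis_angle_deg: float) -> tuple[float, float]:
    """Estimate (tilt angle, tilt direction) in degrees from the ellipse the
    idealized circular beam traces on the surface."""
    ratio = max(0.0, min(1.0, minor_axis / major_axis))
    tilt_deg = math.degrees(math.acos(ratio))
    return tilt_deg, major_axis_angle_deg % 180.0

# A circle of radius 5 detected as a 5 x 7.07 ellipse implies roughly a
# 45-degree tilt, directed along the ellipse's major axis.
print(tilt_from_ellipse(7.07, 5.0, 30.0))       # -> (approx 45.0, 30.0)
```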

Binocular stereo vision three-dimensional measurement method based on line structured light scanning

The invention discloses a binocular stereo vision three-dimensional measurement method based on line structured light scanning, which comprises the steps of performing stereo calibration on binocular industrial cameras, projecting laser light bars by using a line laser, respectively acquiring left and right laser light bar images, extracting light bar center coordinates with sub-pixel accuracy based on a Hessian matrix method, performing light bar matching according to an epipolar constraint principle, and calculating a laser plane equation; secondly, acquiring a line laser scanning image of a workpiece to be measured, extracting coordinates of the image of the workpiece to be measured, calculating world coordinates of the workpiece to be measured by combining binocular camera calibration parameters and the laser plane equation, and recovering the three-dimensional surface topography of the workpiece to be measured. Compared with a common three-dimensional measurement system combining a monocular camera and line structured light, the binocular stereo vision three-dimensional measurement method avoids complicated laser plane calibration. Compared with the traditional stereo vision method, the binocular stereo vision three-dimensional measurement method reduces the difficulty of stereo matching in binocular stereo vision while ensuring the measurement accuracy, and improves the robustness and the usability of a visual three-dimensional measurement system.
Owner:CHANGSHA XIANGJI HAIDUN TECH CO LTD
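
For illustration, the Python sketch below covers two simplified steps from the pipeline described above: fitting the laser-plane equation to light-bar points already triangulated by the calibrated stereo pair, and recovering a surface point during scanning by intersecting a back-projected camera ray with that plane. The intrinsic matrix, the synthetic plane, and the helper functions are assumptions for the example; calibration, sub-pixel stripe extraction, and epipolar matching are not shown.

```python
# Minimal sketch (assumed camera model and helper names): (1) fit the laser
# plane a*x + b*y + c*z + d = 0 to light-bar points already triangulated by
# the calibrated stereo pair, then (2) reconstruct a point during scanning by
# intersecting a back-projected camera ray with that plane.
import numpy as np

def fit_laser_plane(points_3d: np.ndarray) -> np.ndarray:
    """Least-squares plane [a, b, c, d] through an N x 3 array of points."""
    centroid = points_3d.mean(axis=0)
    # The singular vector for the smallest singular value of the centered
    # points is the plane normal.
    _, _, vt = np.linalg.svd(points_3d - centroid)
    normal = vt[-1]
    return np.append(normal, -normal @ centroid)

def ray_plane_point(pixel_xy, K, plane) -> np.ndarray:
    """Intersect the viewing ray through pixel (u, v) with the laser plane.
    The camera frame is taken as the world frame here (R = I, t = 0)."""
    u, v = pixel_xy
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction, X = t * ray
    t = -plane[3] / (plane[:3] @ ray)
    return t * ray

# Toy example: an assumed intrinsic matrix and synthetic light-bar points that
# lie on the plane z = 2 - 0.5 * x.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
rng = np.random.default_rng(0)
uv = rng.uniform(-0.5, 0.5, size=(50, 2))            # spread within the plane
stripe_pts = np.stack([uv[:, 0], uv[:, 1], 2.0 - 0.5 * uv[:, 0]], axis=1)

plane = fit_laser_plane(stripe_pts)
point = ray_plane_point((400.0, 300.0), K, plane)
print(point, "on plane:", np.isclose(plane[:3] @ point + plane[3], 0.0))
```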

System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

Inactive · US20080279446A1 · Reduce system cost · Information can be reduced · Using optical means · Acquisition of 3D object measurements · Interaction interface · Telecollaboration
A technique, associated system and program code, for retrieving depth information about at least one surface of an object. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns at the object; capturing an image reflected from the surface; and recovering pattern information from the reflected image for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping can be utilized in a host of applications, for example: displaying a 3-D view of the object; a virtual-reality user-interaction interface with a computerized device; recognition and comparison of faces, other animal features, or inanimate objects for security or identification purposes; and 3-D video teleconferencing/telecollaboration.
Owner:UNIV OF KENTUCKY RES FOUND