390 results about "Acquisition of 3D object measurements" patented technology

System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

Inactive · US7440590B1 · More detailed and large depth mapping · Limited bandwidth · Projectors · Cathode-ray tube indicators · Interaction interface · Telecollaboration
A technique, associated system, and program code for retrieving depth information about at least one surface of an object. Core features include: projecting a composite image, comprising a plurality of modulated structured light patterns, at the object; capturing an image reflected from the surface; and recovering pattern information from the reflected image for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern is distinct from each of the other signal waveforms used for the modulation of the other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping can be utilized in a host of applications, for example: displaying a 3-D view of the object; a virtual-reality user-interaction interface with a computerized device; recognition and comparison of faces, other animal features, or inanimate objects for security or identification purposes; and 3-D video teleconferencing/telecollaboration.
Owner:UNIV OF KENTUCKY RES FOUND
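
The abstract above outlines the core signal-processing idea: each structured light pattern is modulated onto its own uncorrelated carrier, the carriers are summed into one composite projection, and each pattern is recovered from the reflected image by demodulation. Below is a minimal numerical sketch of that general idea (not the patented implementation), assuming sinusoidal fringe patterns, cosine carriers along the orthogonal image axis, and a simple box filter as the low-pass stage; the closing three-step phase formula is standard, with calibration and triangulation omitted.

```python
# Sketch only: composite of carrier-modulated fringe patterns and their recovery
# by synchronous demodulation. All sizes, frequencies and filters are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter1d

H, W = 480, 640
carriers = [20, 40, 60]          # distinct carrier frequencies (cycles per image height)
y = np.arange(H)[:, None] / H    # carrier axis (vertical)
x = np.arange(W)[None, :] / W    # fringe axis (horizontal)

# Three phase-shifted sinusoidal fringe patterns (the structured light patterns).
patterns = [(0.5 + 0.5 * np.cos(2 * np.pi * 10 * x + 2 * np.pi * k / 3)) * np.ones((H, 1))
            for k in range(3)]

# Composite projection: each pattern amplitude-modulates its own carrier along y.
composite = sum(p * np.cos(2 * np.pi * f * y) for p, f in zip(patterns, carriers))
reflected = composite            # stand-in for the captured reflection

# Synchronous demodulation: mix with the matching carrier, low-pass along y.
recovered = []
for f in carriers:
    mixed = reflected * np.cos(2 * np.pi * f * y)
    baseband = uniform_filter1d(mixed, size=48, axis=0)  # box filter spanning whole beat periods
    recovered.append(2.0 * baseband)                     # undo the 1/2 factor from mixing

# Standard 3-step phase-shifting formula (shifts 0, 2pi/3, 4pi/3) -> wrapped phase,
# which calibration/triangulation would convert into a depth map.
I1, I2, I3 = recovered
wrapped_phase = np.arctan2(np.sqrt(3) * (I3 - I2), 2 * I1 - I2 - I3)
```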

System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

Inactive · US20080279446A1 · Reduce system cost · Information can be reduced · Using optical means · Acquisition of 3D object measurements · Interaction interface · Telecollaboration
A technique, associated system, and program code for retrieving depth information about at least one surface of an object. Core features include: projecting a composite image, comprising a plurality of modulated structured light patterns, at the object; capturing an image reflected from the surface; and recovering pattern information from the reflected image for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern is distinct from each of the other signal waveforms used for the modulation of the other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping can be utilized in a host of applications, for example: displaying a 3-D view of the object; a virtual-reality user-interaction interface with a computerized device; recognition and comparison of faces, other animal features, or inanimate objects for security or identification purposes; and 3-D video teleconferencing/telecollaboration.
Owner:UNIV OF KENTUCKY RES FOUND

System and method of three-dimensional image capture and modeling

System and method for constructing a 3D model of an object based on a series of silhouette and texture map images. In the exemplary embodiment, an object is placed on a rotating turntable and a stationary camera captures images of the object as it rotates. In one pass, the system captures a number of photographic images that will be processed into image silhouettes. In a second pass, the system gathers texture data. After a calibration procedure (used to determine the camera's focal length and the turntable's axis of rotation), a silhouette processing module determines a set of two-dimensional polygon shapes (silhouette contour polygons) that describe the contours of the object. The system uses the silhouette contour polygons to create a 3D polygonal mesh model of the object. The system determines the shape of the 3D model analytically, by finding the areas of intersection between the edges of the model faces and the edges of the silhouette contour polygons. The system creates an initial (rough) model of the 3D object from one of the silhouette contour polygons, then executes an overlaying procedure to process each of the remaining silhouette contour polygons. In the overlaying process, the system processes the silhouette contour polygons collected from each silhouette image, projecting each face of the rough 3D model onto the image plane of the silhouette contour polygons. Overlaying each face of the rough 3D model onto the 2D plane of the silhouette contour polygons allows the system to determine the areas that are extraneous and should be removed from the rough 3D model. As the system processes the silhouette contour polygons in each image, it removes the extraneous spaces from the initial object model and creates new faces to patch “holes.” The polygonal mesh model, once completed, can be transformed into a triangulated mesh model. In a subsequent step, the system uses a deterministic procedure to map texture from the texture images onto the triangles of the 3D mesh model, locating the area in the various texture map images that is “best” for each mesh triangle.
Owner:SIZMEK TECH
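
The abstract above intersects polygonal silhouette contours analytically to refine the mesh. As a much simpler illustration of the same silhouette-intersection (visual hull) principle, the sketch below carves a voxel grid instead, assuming binary silhouette masks and known 3x4 camera projection matrices for each turntable position; it is a generic stand-in, not the patented polygon-based procedure.

```python
# Illustrative voxel-carving sketch of the silhouette-intersection idea.
# Camera model, grid size and extent are assumptions.
import numpy as np

def carve_visual_hull(silhouettes, projection_matrices, grid_res=64, extent=1.0):
    """Keep only voxels whose projection falls inside every silhouette mask."""
    lin = np.linspace(-extent, extent, grid_res)
    X, Y, Z = np.meshgrid(lin, lin, lin, indexing="ij")
    pts = np.stack([X, Y, Z, np.ones_like(X)], axis=-1).reshape(-1, 4)  # homogeneous coords
    occupied = np.ones(len(pts), dtype=bool)

    for mask, P in zip(silhouettes, projection_matrices):   # one view per turntable step
        proj = pts @ P.T                                     # project voxel centres (N x 3)
        u = (proj[:, 0] / proj[:, 2]).round().astype(int)
        v = (proj[:, 1] / proj[:, 2]).round().astype(int)
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        hit = np.zeros(len(pts), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]             # True where the silhouette covers the pixel
        occupied &= hit                                      # carve away anything outside any view

    return occupied.reshape(grid_res, grid_res, grid_res)
```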

System and method for 3D imaging using structured light illumination

Inactive · US8224064B1 · Acquisition speed is fast · More robust to extremely worn ridges of the fingers · Image enhancement · Image analysis · Random noise · Computer science
A biometrics system captures and processes a handprint image using structured light illumination to create a 2D representation equivalent to a rolled inked handprint. The biometrics system includes an enclosure with a scan volume for placement of the hand. A reference plane with a backdrop pattern forms one side of the scan volume. The backdrop pattern is preferably a random noise pattern, and the coordinates of the backdrop pattern are predetermined at system provisioning. The biometrics system further includes at least one projection unit for projecting a structured light pattern onto a hand positioned in the scan volume, on or in front of the backdrop pattern, and at least two cameras for capturing a plurality of images of the hand, wherein each of the plurality of images includes at least a portion of the hand and the backdrop pattern. A processing unit calculates 3D coordinates of the hand from the plurality of images, using the predetermined coordinates of the backdrop pattern to align the images, and maps the 3D coordinates to a 2D flat surface to create a 2D representation equivalent to a rolled inked handprint. The processing unit can also adjust calibration parameters for each hand scan by calculating coordinates of the portion of the backdrop pattern visible in at least one image and comparing them with the predetermined coordinates of the backdrop pattern.
Owner:UNIV OF KENTUCKY RES FOUND
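
One step the abstract mentions but does not detail is the final mapping of 3D coordinates onto a flat 2D surface to emulate a rolled inked print. The toy sketch below unrolls points on a roughly cylindrical finger or palm segment into arc-length and height coordinates; the axis estimate and mean-radius scaling are illustrative assumptions, not the patent's actual mapping.

```python
# Toy sketch: flatten 3-D surface points of a roughly cylindrical digit into 2-D
# "rolled-equivalent" coordinates. Axis alignment and radius handling are assumptions.
import numpy as np

def unroll_to_2d(points):
    """points: (N, 3) array with the digit roughly aligned to the z axis."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    cx, cy = x.mean(), y.mean()                # crude estimate of the cylinder axis position
    r = np.hypot(x - cx, y - cy)               # radial distance of each point from the axis
    theta = np.arctan2(y - cy, x - cx)         # angle around the axis
    u = theta * r.mean()                       # arc length = rolled-out width coordinate
    v = z                                      # height along the digit
    return np.stack([u, v], axis=1)            # (N, 2) flattened coordinates
```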

System and method for authentication of a workpiece using three dimensional shape recovery

A workpiece authentication system uses shape recovery techniques to extract explicit three-dimensional ("3-D") features of the surface geometry of a designated portion of a workpiece from images produced under different lighting conditions. The system then bases authentication on the 3-D surface features. The system recovers surface normals, or equivalently gradients, for selected locations within the designated portion of the workpiece from multiple enrollment images produced under different illumination conditions. The system then encodes the surface normal information into authentication indicia that are placed on the workpiece and/or stores the surface normals or related information. Thereafter, the system determines that a given workpiece is authentic if the surface normals recovered from various verification images correspond to the stored surface normal information or to the surface normal information encoded into the indicia. Alternatively, the system may use the surface normals to predict what an image should contain when the workpiece is subjected to a particular lighting condition. The system then determines that the workpiece is authentic if the predicted image and the image produced using the workpiece correspond. The system may instead encode brightness patterns associated with one or more enrollment images into the indicia. The system then recovers surface normals from images produced during verification operations, predicts what the brightness image should contain, and compares the enrollment image to the prediction.
Owner:ESCHER GROUP
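
The normal-recovery step described above is, in essence, photometric stereo: with several images of the same surface under known directional lights, per-pixel surface normals (and albedo) follow from a linear least-squares fit. The sketch below shows that generic computation, assuming k >= 3 grayscale images, known unit light directions, and an approximately Lambertian surface; it is not the patent's specific implementation.

```python
# Minimal photometric-stereo sketch: recover per-pixel surface normals and albedo
# from an image stack under known directional lights. Inputs are assumed shapes.
import numpy as np

def recover_normals(images, light_dirs):
    """images: (k, H, W) grayscale stack; light_dirs: (k, 3) unit light vectors."""
    k, H, W = images.shape
    I = images.reshape(k, -1)                            # (k, H*W) intensities
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, H*W) = albedo * normal
    rho = np.linalg.norm(G, axis=0) + 1e-12              # per-pixel albedo (avoid divide-by-zero)
    normals = (G / rho).T.reshape(H, W, 3)               # unit surface normals
    return normals, rho.reshape(H, W)

# Verification would then compare `normals` (or a brightness image re-rendered as
# albedo * normals @ light_dir for a chosen light) against the enrolled data.
```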

Methods for registration of three-dimensional frames to create three-dimensional virtual models of objects

A method and system are provided for constructing a virtual three-dimensional model of an object using a data processing system and at least one machine-readable memory accessible to the data processing system. A set of at least two digital three-dimensional frames of portions of the object is obtained from a source, such as a computing system coupled to an optical or laser scanner, CT scanner, magnetic resonance tomography scanner, or other source. The at least two frames comprise sets of point coordinates in a three-dimensional coordinate system, providing differing information about the surface of the object. The frames substantially overlap in the portions of the surface they represent, but do not coincide exactly, for example because the scanning device moved relative to the object between the generation of the frames. Data representing the set of frames are stored in the memory. The data processing system processes the data representing the set of frames so as to register the frames relative to each other, thereby producing a three-dimensional virtual representation of the portion of the surface of the object covered by the set of frames. The registration is performed without using prior knowledge of the spatial relationship between the frames. The resulting three-dimensional virtual model or representation is substantially consistent with all of the frames.
Owner:ORAMETRIX
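
The abstract states that registration works without prior knowledge of the frames' relative pose but does not spell out the algorithm. For orientation, the sketch below uses a generic ICP-style stand-in: repeatedly match each source point to its nearest neighbour in the target frame and solve for the best-fit rigid motion (Kabsch); the fixed iteration count and lack of outlier handling are simplifications, not the patented procedure.

```python
# Generic ICP-style sketch for registering one 3-D frame onto another.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """Register `source` (N x 3) onto `target` (M x 3); returns the aligned copy."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)                  # nearest-neighbour correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)     # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
        R = Vt.T @ D @ U.T                        # best-fit rotation (Kabsch)
        t = mu_t - R @ mu_s
        src = src @ R.T + t                       # apply the incremental rigid motion
    return src
```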

Terrain modeling method and system fusing geometric characteristics and mechanical characteristics

The invention provides a terrain modeling method and system that fuse geometric and mechanical characteristics, and relates to the technical field of environment modeling. The method comprises the following steps: obtaining a color image and a depth image of a detection area, performing terrain semantic segmentation on the color image, and fusing the semantic segmentation result with the depth information contained in the depth image captured at the same moment to generate a semantic point cloud; mapping the semantic point cloud into a grid map under a map coordinate system to generate the corresponding grid cells, and updating the elevation values and semantic information from the semantic point cloud into those cells; and computing ground mechanical properties from the semantic information, updating the results into the corresponding cells, and generating the terrain model. The method adds mechanical property parameters to the terrain factors, so that terrain is characterized along two dimensions: geometric properties and mechanical properties. The ground pressure-bearing and shear characteristics of areas not yet contacted are inferred in advance through visual perception, which expands the perception range.
Owner:HARBIN INST OF TECH
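
A rough sketch of the pipeline the abstract outlines is given below: back-project the depth image with assumed pinhole intrinsics, attach the per-pixel semantic labels, drop the labelled points into a 2D elevation grid, and attach class-dependent mechanical parameters to each cell. The intrinsics, class table, parameter values, and the identity camera-to-map transform are all placeholders, not values from the patent.

```python
# Sketch of fusing semantic segmentation with depth into a labelled elevation grid.
import numpy as np

FX = FY = 525.0
CX, CY = 320.0, 240.0                        # assumed pinhole intrinsics
CELL = 0.25                                  # grid resolution in metres
MECH_PARAMS = {0: {"cohesion_kPa": 1.0},     # e.g. sand   (made-up values)
               1: {"cohesion_kPa": 15.0}}    # e.g. gravel (made-up values)

def build_terrain_grid(depth, labels, grid_shape=(200, 200)):
    """depth: (H, W) in metres; labels: (H, W) class ids from semantic segmentation."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth
    x = (u - CX) * z / FX                    # back-project pixels to camera-frame points
    y = (v - CY) * z / FY
    # A camera-to-map transform would be applied here; identity is assumed for brevity.
    gi = np.clip((x / CELL).astype(int) + grid_shape[0] // 2, 0, grid_shape[0] - 1)
    gj = np.clip((z / CELL).astype(int), 0, grid_shape[1] - 1)

    elevation = np.full(grid_shape, -np.inf)
    semantics = np.full(grid_shape, -1)
    for i, j, h, c in zip(gi.ravel(), gj.ravel(), (-y).ravel(), labels.ravel()):
        if h > elevation[i, j]:              # keep the highest point seen in each cell
            elevation[i, j] = h
            semantics[i, j] = c

    # Attach class-dependent mechanical parameters to every populated cell.
    mech = {cell: MECH_PARAMS.get(int(c), {}) for cell, c in np.ndenumerate(semantics) if c >= 0}
    return elevation, semantics, mech
```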