
18802 results about "Point cloud" patented technology

A point cloud is a set of data points in space. Point clouds are generally produced by 3D scanners, which measure many points on the external surfaces of objects around them. As the output of 3D scanning processes, point clouds are used for many purposes, including to create 3D CAD models for manufactured parts, for metrology and quality inspection, and for a multitude of visualization, animation, rendering and mass customization applications.
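
As a concrete illustration of the data structure described above, the sketch below represents a small point cloud as an N×3 array of XYZ coordinates with optional per-point attributes. The array layout and attribute names are illustrative, not taken from any particular scanner format.

```python
import numpy as np

# A point cloud is just a set of 3D samples; here, N points with XYZ
# coordinates plus optional per-point attributes (e.g. RGB color).
rng = np.random.default_rng(0)
xyz = rng.uniform(-1.0, 1.0, size=(1000, 3))                 # N x 3 coordinates
rgb = rng.integers(0, 256, size=(1000, 3), dtype=np.uint8)   # N x 3 colors

# Typical bookkeeping: bounding box and centroid of the scanned surface.
print("points:", xyz.shape[0])
print("bounding box min:", xyz.min(axis=0), "max:", xyz.max(axis=0))
print("centroid:", xyz.mean(axis=0))
```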

3D imaging system

The present invention provides a system (method and apparatus) for creating photorealistic 3D models of environments and/or objects from a plurality of stereo images obtained from a mobile stereo camera and optional monocular cameras. The cameras may be handheld, mounted on a mobile platform, a manipulator, or a positioning device. The system automatically detects and tracks features in image sequences and self-references the stereo camera in 6 degrees of freedom by matching the features to a database to track the camera motion, while building the database simultaneously. A motion estimate may also be provided from external sensors and fused with the motion computed from the images. Individual stereo pairs are processed to compute dense 3D data representing the scene and are transformed, using the estimated camera motion, into a common reference and fused together. The resulting 3D data is represented as point clouds, surfaces, or volumes. The present invention also provides a system (method and apparatus) for enhancing 3D models of environments or objects by registering information from additional sensors to improve model fidelity, or to augment it with supplementary information by using a light pattern projector. The present invention also provides a system (method and apparatus) for generating photo-realistic 3D models of underground environments such as tunnels, mines, voids and caves, including automatic registration of the 3D models with pre-existing underground maps.
Owner:MACDONALD DETTWILER & ASSOC INC
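
The abstract describes transforming dense 3D data from individual stereo pairs into a common reference frame using the estimated camera motion, then fusing the result. Below is a minimal sketch of that accumulation step, assuming each frame's pose is already available as a 4×4 camera-to-world rigid transform (the feature tracking and pose estimation themselves are not shown).

```python
import numpy as np

def to_world(points_cam: np.ndarray, pose_cam_to_world: np.ndarray) -> np.ndarray:
    """Apply a 4x4 rigid transform to an (N, 3) array of camera-frame points."""
    homo = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])  # (N, 4)
    return (pose_cam_to_world @ homo.T).T[:, :3]

def fuse_frames(frames):
    """Concatenate per-frame dense 3D data into one cloud in the world frame.

    `frames` is an iterable of (points, pose) tuples, where `points` is (N, 3)
    in the camera frame and `pose` is the estimated 4x4 camera-to-world motion.
    """
    return np.vstack([to_world(p, T) for p, T in frames])

# Example: two frames related by a 1 m translation along x.
T0 = np.eye(4)
T1 = np.eye(4); T1[0, 3] = 1.0
cloud0 = np.random.rand(100, 3)
cloud1 = np.random.rand(100, 3)
world_cloud = fuse_frames([(cloud0, T0), (cloud1, T1)])
print(world_cloud.shape)  # (200, 3)
```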

System and Method for Manipulating Data Having Spatial Co-ordinates

Systems and methods are provided for extracting various features from data having spatial coordinates. The systems and methods may identify and extract data points from a point cloud, where the data points are considered to be part of the ground surface, a building, or a wire (e.g. power lines). Systems and methods are also provided for enhancing a point cloud using external data (e.g. images and other point clouds), and for tracking a moving object by comparing images with a point cloud. An objects database is also provided which can be used to scale point clouds to be of similar size. The objects database can also be used to search for certain objects in a point cloud, as well as recognize unidentified objects in a point cloud.
Owner:REELER EDMUND COCHRANE +8
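
One of the features the abstract mentions is identifying points that belong to the ground surface. The sketch below uses a generic RANSAC plane fit to separate approximate ground points from the rest; the thresholds and iteration count are illustrative and are not taken from the patent.

```python
import numpy as np

def ransac_ground_plane(points, n_iters=200, dist_thresh=0.05, rng=None):
    """Fit a dominant plane to (N, 3) points and return an inlier mask.

    A generic RANSAC plane fit used as a stand-in for the ground-extraction
    step; real systems add checks such as plane orientation and height.
    """
    rng = np.random.default_rng(rng)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - sample[0]) @ normal)
        mask = dist < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

pts = np.random.rand(2000, 3) * [10, 10, 0.02]   # mostly flat "ground"
pts[:200, 2] += 2.0                              # some elevated structure
ground = ransac_ground_plane(pts)
print("ground points:", ground.sum(), "of", len(pts))
```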

System and method for three-dimensional alignment of objects using machine vision

This invention provides a system and method for determining the three-dimensional alignment of a modeled object or scene. After calibration, a 3D (stereo) sensor system views the object to derive a runtime 3D representation of the scene containing the object. Rectified images from each stereo head are preprocessed to enhance their edge features. A stereo matching process is then performed on at least two (a pair) of the rectified preprocessed images at a time by locating a predetermined feature on a first image and then locating the same feature in the other image. 3D points are computed for each pair of cameras to derive a 3D point cloud. The 3D point cloud is generated by transforming the 3D points of each camera pair into the world 3D space from the world calibration. The amount of 3D data from the point cloud is reduced by extracting higher-level geometric shapes (HLGS), such as line segments. Found HLGS from runtime are corresponded to HLGS on the model to produce candidate 3D poses. A coarse scoring process prunes the number of poses. The remaining candidate poses are then subjected to a further, more refined scoring process. These surviving candidate poses are then verified by, for example, fitting found 3D or 2D points of the candidate poses to a larger set of corresponding three-dimensional or two-dimensional model points, whereby the closest match is the best refined three-dimensional pose.
Owner:COGNEX CORP
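
The abstract describes pruning candidate 3D poses with a coarse scoring step before refinement. Below is a minimal sketch of one plausible scoring scheme: count how many transformed model points fall near a scene point, then keep the best-scoring candidates. The brute-force distance computation and the tolerance value are illustrative, not the patent's actual scoring function.

```python
import numpy as np

def apply_pose(points, pose):
    """Transform (N, 3) model points by a 4x4 pose."""
    return points @ pose[:3, :3].T + pose[:3, 3]

def coarse_score(model_pts, scene_pts, pose, tol=0.02):
    """Fraction of transformed model points within `tol` of some scene point.

    Brute-force nearest neighbours; fine for a sketch, replaced by spatial
    indexing (k-d trees, voxel hashing) in practice.
    """
    transformed = apply_pose(model_pts, pose)
    d2 = ((transformed[:, None, :] - scene_pts[None, :, :]) ** 2).sum(-1)
    return (d2.min(axis=1) < tol ** 2).mean()

def prune_poses(model_pts, scene_pts, candidate_poses, keep=5):
    """Keep the best-scoring candidate poses for the refinement stage."""
    scores = [coarse_score(model_pts, scene_pts, T) for T in candidate_poses]
    order = np.argsort(scores)[::-1]
    return [candidate_poses[i] for i in order[:keep]]
```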

Ladar Point Cloud Compression

Various embodiments are disclosed for ladar point cloud compression. For example, a processor can be used to analyze data representative of an environmental scene to intelligently select a subset of range points within a frame to target with ladar pulses via a scanning ladar transmission system. In another example embodiment, a processor can perform ladar point cloud compression by (1) processing data representative of an environmental scene, and (2) based on the processing, selecting a plurality of the range points in the point cloud for retention in a compressed point cloud, the compressed point cloud comprising fewer range points than the generated point cloud.
Owner:AEYE INC
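
The second embodiment above retains only a selected subset of range points. As an illustration of the general idea (not the patent's selection criterion), the sketch below keeps the points where the local range changes most sharply, on the assumption that depth discontinuities carry most of the scene structure.

```python
import numpy as np

def compress_range_image(range_image, keep_fraction=0.2):
    """Select a subset of range points from a 2D range image.

    Illustrative criterion: rank points by local range gradient magnitude and
    keep the top `keep_fraction`, so depth discontinuities (object boundaries)
    are preserved while flat regions are thinned out.
    """
    gy, gx = np.gradient(range_image.astype(float))
    saliency = np.hypot(gx, gy)
    n_keep = int(keep_fraction * range_image.size)
    flat_idx = np.argsort(saliency.ravel())[::-1][:n_keep]
    rows, cols = np.unravel_index(flat_idx, range_image.shape)
    return rows, cols, range_image[rows, cols]

# Example: a synthetic 64x64 range image with a step edge.
rng_img = np.ones((64, 64)) * 10.0
rng_img[:, 32:] = 5.0
rows, cols, ranges = compress_range_image(rng_img, keep_fraction=0.1)
print("kept", len(ranges), "of", rng_img.size, "range points")
```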

Point cloud compression

A system comprises an encoder configured to compress attribute and/or spatial information for a point cloud and/or a decoder configured to decompress compressed attribute and/or spatial information for the point cloud. To compress the attribute and/or spatial information, the encoder is configured to convert a point cloud into an image based representation. Also, the decoder is configured to generate a decompressed point cloud based on an image based representation of a point cloud.
Owner:APPLE INC
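
The encoder described above converts a point cloud into an image based representation before compression. Below is a rough sketch of that idea, projecting points onto a regular 2D grid to form depth, color, and occupancy images; the projection and patch-packing details of the actual codec are far more involved, and the grid size here is arbitrary.

```python
import numpy as np

def project_to_images(xyz, rgb, grid=(128, 128)):
    """Project a point cloud onto the XY plane to build depth and color images.

    A crude stand-in for patch projection/packing: each occupied pixel stores
    the depth (z) and color of the nearest point; empty pixels remain unset.
    """
    h, w = grid
    mins, maxs = xyz.min(axis=0), xyz.max(axis=0)
    scale = (np.array([w, h]) - 1) / np.maximum(maxs[:2] - mins[:2], 1e-9)
    cols, rows = ((xyz[:, :2] - mins[:2]) * scale).astype(int).T

    depth = np.full((h, w), np.inf)
    color = np.zeros((h, w, 3), dtype=np.uint8)
    for r, c, z, col in zip(rows, cols, xyz[:, 2], rgb):
        if z < depth[r, c]:            # keep the closest point per pixel
            depth[r, c] = z
            color[r, c] = col
    occupancy = np.isfinite(depth)     # which pixels actually hold a point
    return depth, color, occupancy

xyz = np.random.rand(5000, 3)
rgb = (np.random.rand(5000, 3) * 255).astype(np.uint8)
depth, color, occ = project_to_images(xyz, rgb)
print("occupied pixels:", occ.sum())
```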

Determining camera motion

Status: Active | Publication: US20070103460A1 | Benefits: efficiently parameterized | Classifications: television system details, image analysis | Keywords: point cloud, 3D image
Camera motion is determined in a three-dimensional image capture system using a combination of two-dimensional image data and three-dimensional point cloud data available from a stereoscopic, multi-aperture, or similar camera system. More specifically, a rigid transformation of point cloud data between two three-dimensional point clouds may be more efficiently parameterized using point correspondence established between two-dimensional pixels in source images for the three-dimensional point clouds.
Owner:MEDIT CORP
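
The abstract describes estimating a rigid transformation between two 3D point clouds using point correspondences established between 2D pixels in the source images. Once correspondences are known, the standard SVD-based (Kabsch/Procrustes) solution is the usual way to recover rotation and translation; the sketch below shows that textbook method, which is not necessarily the exact parameterization the patent claims.

```python
import numpy as np

def rigid_transform_from_correspondences(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t.

    `src` and `dst` are (N, 3) arrays of corresponding 3D points, e.g. the
    3D positions behind matched 2D pixels in two frames.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Self-check: recover a known rotation about z plus a translation.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -0.2, 1.0])
src = np.random.rand(50, 3)
dst = src @ R_true.T + t_true
R, t = rigid_transform_from_correspondences(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```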

Point Cloud Compression using Prediction and Shape-Adaptive Transforms

A method compresses a point cloud composed of a plurality of points in a three-dimensional (3D) space by first acquiring the point cloud with a sensor, wherein each point is associated with a 3D coordinate and at least one attribute. The point cloud is partitioned into an array of 3D blocks of elements, wherein some of the elements in the 3D blocks have missing points. For each 3D block, attribute values for the 3D block are predicted based on the attribute values of neighboring 3D blocks, resulting in a 3D residual block. A 3D transform is applied to each 3D residual block using locations of occupied elements to produce transform coefficients, wherein the transform coefficients have a magnitude and sign. The transform coefficients are entropy encoded according to the magnitudes and sign bits to produce a bitstream.
Owner:MITSUBISHI ELECTRIC RES LAB INC
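
To make the blocking-and-prediction part of the method above concrete, the sketch below partitions a point cloud into cubic blocks by integer-dividing the coordinates and predicts each block's attribute values from the mean of already-visited neighboring blocks, leaving residuals for a transform stage. The shape-adaptive transform and entropy coder are omitted, and the block size and scan order are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

def block_residuals(xyz, attr, block_size=0.25):
    """Group points into cubic 3D blocks and form prediction residuals.

    Each block's attribute values are predicted by the mean attribute of
    previously visited face-adjacent blocks (or the block's own mean when no
    neighbor has been coded yet), mimicking a predict-then-transform layout.
    """
    keys = np.floor(xyz / block_size).astype(int)
    blocks = defaultdict(list)
    for k, a in zip(map(tuple, keys), attr):
        blocks[k].append(a)

    coded_means = {}
    residuals = {}
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for key in sorted(blocks):                      # fixed scan order
        values = np.asarray(blocks[key], dtype=float)
        neighbor_means = [coded_means[tuple(np.add(key, o))]
                          for o in offsets if tuple(np.add(key, o)) in coded_means]
        prediction = np.mean(neighbor_means) if neighbor_means else values.mean()
        residuals[key] = values - prediction        # input to the 3D transform
        coded_means[key] = values.mean()
    return residuals

xyz = np.random.rand(3000, 3)
attr = np.random.rand(3000)                         # e.g. reflectance
res = block_residuals(xyz, attr)
print("blocks:", len(res))
```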

Method of generating surface defined by boundary of three-dimensional point cloud

Disclosed is a method of generating a three-dimensional (3D) surface defined by a boundary of a 3D point cloud. The method comprises generating density and depth maps from the 3D point cloud, constructing a 2D mesh from the depth and density maps, transforming the 2D mesh into a 3D mesh, and rendering 3D polygons defined by the 3D mesh.
Owner:NVIDIA CORP
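
Below is a rough sketch of the depth-map-to-mesh portion of the method above: points are binned onto a 2D grid to form a depth map, the fully occupied 2×2 grid cells are triangulated with a regular pattern, and the vertices are lifted back to 3D using the stored depth. The density map and the boundary handling in the actual method are not reproduced here, and the grid resolution is arbitrary.

```python
import numpy as np

def depth_map_from_points(xyz, grid=(64, 64)):
    """Bin points on the XY plane; each cell stores the mean z of its points."""
    h, w = grid
    mins, maxs = xyz.min(axis=0), xyz.max(axis=0)
    scale = (np.array([w, h]) - 1) / np.maximum(maxs[:2] - mins[:2], 1e-9)
    cols, rows = ((xyz[:, :2] - mins[:2]) * scale).astype(int).T
    depth = np.full((h, w), np.nan)
    counts = np.zeros((h, w))
    np.add.at(counts, (rows, cols), 1)
    sums = np.zeros((h, w))
    np.add.at(sums, (rows, cols), xyz[:, 2])
    occupied = counts > 0
    depth[occupied] = sums[occupied] / counts[occupied]
    return depth, mins, scale

def grid_to_mesh(depth, mins, scale):
    """Triangulate occupied 2x2 grid cells and lift vertices to 3D."""
    h, w = depth.shape
    verts, faces, index = [], [], -np.ones((h, w), dtype=int)
    for r in range(h):
        for c in range(w):
            if np.isfinite(depth[r, c]):
                index[r, c] = len(verts)
                verts.append([c / scale[0] + mins[0], r / scale[1] + mins[1], depth[r, c]])
    for r in range(h - 1):
        for c in range(w - 1):
            quad = index[r:r + 2, c:c + 2]
            if (quad >= 0).all():           # all four corners have depth
                faces.append([quad[0, 0], quad[1, 0], quad[0, 1]])
                faces.append([quad[0, 1], quad[1, 0], quad[1, 1]])
    return np.array(verts), np.array(faces)

xyz = np.random.rand(5000, 3)
depth, mins, scale = depth_map_from_points(xyz)
verts, faces = grid_to_mesh(depth, mins, scale)
print("vertices:", len(verts), "triangles:", len(faces))
```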

Kinect-based robot self-positioning method

The invention discloses a Kinect-based robot self-positioning method. The method includes the following steps. First, the RGB image and depth image of the environment are acquired with the Kinect, the relative motion of the robot is estimated by fusing the visual information with odometer readings, and the pose is tracked from the robot's pose at the previous time step. Second, the depth information is converted into a three-dimensional point cloud, the ground plane is extracted from the point cloud, and the height and pitch angle of the Kinect relative to the ground are automatically calibrated from that plane, so that the three-dimensional point cloud can be projected onto the ground plane to obtain a two-dimensional point cloud similar to laser scan data. Third, the two-dimensional point cloud is matched against a pre-constructed occupancy grid map of the environment, correcting the accumulated error of the tracking process and yielding an accurate pose estimate. Because the Kinect replaces a laser scanner for positioning, the cost is low; fusing image and depth information gives the method high precision; and the method is compatible with laser-built maps and does not require the mounting height and pose of the Kinect to be calibrated in advance, so it is convenient to use and satisfies the requirements of autonomous positioning and navigation of the robot.
Owner:HANGZHOU JIAZHI TECH CO LTD
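
To illustrate the projection step in the method above, the sketch below back-projects a Kinect-style depth image into a 3D point cloud with pinhole intrinsics and then flattens points within a height band onto the ground plane to obtain a laser-like 2D point cloud. The intrinsics, the axis remapping, and the band limits are illustrative placeholders, and the height/pitch calibration described in the abstract is assumed to have been applied already.

```python
import numpy as np

# Illustrative pinhole intrinsics for a Kinect-style depth camera.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def depth_to_cloud(depth_m):
    """Back-project a (H, W) metric depth image into an (N, 3) point cloud."""
    h, w = depth_m.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth_m > 0
    z = depth_m[valid]
    x = (us[valid] - CX) * z / FX
    y = (vs[valid] - CY) * z / FY
    return np.stack([x, y, z], axis=1)

def cloud_to_2d_scan(cloud_ground_frame, z_min=0.1, z_max=1.5):
    """Project points within a height band onto the ground to mimic a 2D laser scan.

    Assumes the cloud has already been rotated into the ground frame (z up),
    i.e. after the height/pitch calibration step described in the abstract.
    """
    z = cloud_ground_frame[:, 2]
    band = (z > z_min) & (z < z_max)
    return cloud_ground_frame[band][:, :2]          # drop the height

depth = np.full((480, 640), 2.0)                    # flat wall 2 m away
cloud = depth_to_cloud(depth)
# Remap camera axes (x right, y down, z forward) to a z-up ground frame.
scan = cloud_to_2d_scan(np.c_[cloud[:, 0], cloud[:, 2], -cloud[:, 1]])
print("3D points:", len(cloud), "2D scan points:", len(scan))
```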

Parallax optimization algorithm-based binocular stereo vision automatic measurement method

Status: Inactive | Publication: CN103868460A | Benefits: accurate and automatic acquisition, complete 3D point cloud information | Classifications: image analysis, using optical means | Keywords: binocular stereo, non-target regions
The invention discloses a parallax optimization algorithm-based binocular stereo vision automatic measurement method. The method comprises the following steps: 1, obtaining a rectified binocular image pair; 2, computing a preliminary disparity map with a stereo matching algorithm, using the left view as the reference image; 3, masking the rectified left view so that the target object region keeps its color while all non-target regions are set to black; 4, obtaining a complete disparity map of the target object region from that masked region; 5, converting the complete disparity map into a three-dimensional point cloud according to the projection model; 6, reprojecting the three-dimensional point cloud onto the image plane to form a pixel map with associated coordinates; 7, automatically measuring the length and width of the target object with morphological operations. The method simplifies the binocular measurement workflow; reduces the influence of specular reflection, foreshortening, perspective distortion, and low or repetitive texture on smooth surfaces; realizes automatic and intelligent measurement; widens the application range of binocular measurement; and provides technical support for subsequent robot binocular vision.
Owner:GUILIN UNIV OF ELECTRONIC TECH
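
For the disparity-to-point-cloud conversion in step 5 above, the standard rectified-stereo relation z = f·B/d is typically used. The sketch below converts a disparity map to 3D points for the masked target region and measures the extent of the resulting cloud as a stand-in for the morphological measurement in step 7; the focal length, baseline, and principal point are made-up values.

```python
import numpy as np

F_PX   = 800.0     # focal length in pixels (illustrative)
BASE_M = 0.12      # stereo baseline in metres (illustrative)
CX, CY = 320.0, 240.0

def disparity_to_cloud(disp, mask):
    """Triangulate a disparity map into 3D points for masked target pixels.

    Uses the rectified-stereo relations z = f*B/d, x = (u - cx)*z/f,
    y = (v - cy)*z/f; pixels with zero disparity or outside the mask are skipped.
    """
    vs, us = np.nonzero(mask & (disp > 0))
    d = disp[vs, us].astype(float)
    z = F_PX * BASE_M / d
    x = (us - CX) * z / F_PX
    y = (vs - CY) * z / F_PX
    return np.stack([x, y, z], axis=1)

def measure_extent(cloud):
    """Rough length/width of the target: extents of its points in x and y."""
    spans = cloud[:, :2].max(axis=0) - cloud[:, :2].min(axis=0)
    return float(spans.max()), float(spans.min())   # (length, width)

disp = np.zeros((480, 640))
mask = np.zeros((480, 640), dtype=bool)
disp[200:280, 260:380] = 40.0          # a synthetic rectangular target
mask[200:280, 260:380] = True
cloud = disparity_to_cloud(disp, mask)
print("length, width (m):", measure_extent(cloud))
```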

Binocular stereo vision based intelligent three-dimensional human face reconstruction method and system

The invention discloses a binocular stereo vision based intelligent three-dimensional human face reconstruction method and system. The method comprises: preprocessing operations, including image normalization, brightness normalization, and image rectification, are applied to the face images; the face region in the preprocessed image is located and facial feature points are extracted; the projection matrix is recovered to obtain the intrinsic and extrinsic parameters of the cameras; based on the facial feature points, gray-level cross-correlation matching operators are extended to color information, and a disparity map is computed by stereo matching under epipolar constraints, face-region constraints, and facial geometry constraints; the three-dimensional coordinates of the scattered point cloud of the face are then computed from the camera calibration results and the disparity map, producing a three-dimensional face model. With these steps, the invention reconstructs a smoother and more realistic three-dimensional face model.
Owner:BEIJING JIAOTONG UNIV
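
The abstract above mentions extending gray-level cross-correlation matching to color information. As a rough illustration of that idea (not the patent's exact operator), the sketch below computes normalized cross-correlation over the flattened RGB channels of two patches and uses it to pick the best disparity along an epipolar line; the window size and disparity range are arbitrary.

```python
import numpy as np

def color_ncc(patch_a, patch_b):
    """Normalized cross-correlation of two (H, W, 3) color patches.

    Each patch is flattened across all three channels, mean-centred and
    normalized, so color as well as intensity structure contributes to the
    matching score (1.0 = identical up to gain/offset, ~0 = unrelated).
    """
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def best_match_along_row(left, right, row, col, win=5, max_disp=64):
    """Return the disparity with the highest color NCC for one left-image pixel."""
    half = win // 2
    ref = left[row - half:row + half + 1, col - half:col + half + 1]
    scores = []
    for d in range(max_disp):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        scores.append(color_ncc(ref, cand))
    return int(np.argmax(scores)) if scores else 0

left = np.random.rand(100, 100, 3)
right = np.roll(left, -7, axis=1)       # synthetic 7-pixel disparity
print("estimated disparity:", best_match_along_row(left, right, 50, 60))
```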