
393 results for "point cloud segmentation" patented technology

Point cloud segmentation can be straightforward as long as the assumptions can be mathematically modelled. For example, a flat floor can simply be extracted by using an elevation threshold. However, it is not always possible to make such simple assumptions, and the more assumptions that are made, the less generally applicable the approach becomes.
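The flat-floor example above can be sketched in a few lines. This is an illustrative fragment, not code from any patent below; the 0.05 m threshold and the synthetic points are assumptions.

```python
def split_by_elevation(points, z_threshold=0.05):
    """Split a point cloud into floor and non-floor by an elevation threshold.

    points: list of (x, y, z) tuples; z_threshold is the assumed maximum
    floor height in metres (an illustrative value).
    """
    floor = [p for p in points if p[2] <= z_threshold]
    rest = [p for p in points if p[2] > z_threshold]
    return floor, rest

# Tiny synthetic cloud: three floor points and two points on a box.
cloud = [(0.0, 0.0, 0.01), (1.0, 0.0, 0.02), (0.0, 1.0, 0.00),
         (0.5, 0.5, 0.40), (0.5, 0.6, 0.80)]
floor, objects = split_by_elevation(cloud)
```

The limitation described above shows up immediately: this only works while the "floor is flat and low" assumption holds.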

Binocular vision obstacle detection method based on three-dimensional point cloud segmentation

The invention provides a binocular vision obstacle detection method based on three-dimensional point cloud segmentation. The method comprises the steps of: synchronously collecting images from two cameras of the same specification, calibrating and correcting the binocular camera, and calculating a three-dimensional point cloud segmentation threshold; obtaining a three-dimensional point cloud through a stereo matching algorithm and three-dimensional reconstruction, and performing image segmentation on the reference image to obtain image blocks; automatically detecting the road-surface height in the three-dimensional point cloud and using the segmentation threshold to separate the cloud into a road-surface point cloud, obstacle point clouds at different positions, and unknown-region point clouds; and combining the segmented point clouds with the segmented image blocks to verify the obstacles and the road surface and to determine the position ranges of the obstacles, the road surface, and the unknown regions. With this method, the road-surface height can still be detected in complex environments, the three-dimensional segmentation threshold is estimated automatically, the obstacle, road-surface, and unknown-region point clouds are obtained by segmentation, and color image segmentation is incorporated so that color information helps confirm the obstacles and the road surface and delimit their position ranges, achieving highly robust obstacle detection with improved reliability and practicality.
Owner:GUILIN UNIV OF ELECTRONIC TECH +1
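The abstract above estimates the road-surface height automatically before thresholding. A minimal, hedged sketch of one common way to do this — taking the mode of an elevation histogram, which is not necessarily the patented estimator:

```python
from collections import Counter

def estimate_ground_height(points, bin_size=0.1):
    """Estimate the road-surface height as the centre of the most
    populated z bin.

    A common heuristic (assumption, not the patent's method): the ground
    usually contributes the largest number of points at one elevation.
    """
    bins = Counter(int(p[2] // bin_size) for p in points)
    best_bin, _ = bins.most_common(1)[0]
    return (best_bin + 0.5) * bin_size  # centre of the winning bin

# Four near-ground points and two points on an obstacle.
cloud = [(0, 0, 0.02), (1, 0, 0.04), (2, 0, 0.06), (3, 0, 0.03),
         (1, 1, 1.20), (1, 1, 1.25)]
ground_z = estimate_ground_height(cloud)
```

Points within some band around `ground_z` would then be labelled road surface, and the rest obstacle or unknown.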

Robot grasp pose estimation method based on object recognition deep learning model

The invention discloses a robot grasping pose estimation method based on an object recognition deep learning model, relating to the technical field of computer vision. The method is based on an RGB-D camera and deep learning, and comprises the following steps: S1, carrying out camera parameter calibration and hand-eye calibration; S2, training an object detection model; S3, establishing a three-dimensional point cloud template library of the target object; S4, identifying the type and position of each article in the area to be grasped; S5, fusing two-dimensional and three-dimensional visual information to obtain the point cloud of a specific target object; S6, completing the pose estimation of the target object; S7, adopting an error avoidance algorithm based on sample accumulation to avoid errors; S8, continuously repeating steps S4 to S7 as the robot end moves toward the target object, so as to iteratively optimize the pose estimation of the target object. The algorithm uses a YOLO target detection model for fast early-stage detection, which reduces the computation required for three-dimensional point cloud segmentation and matching and improves efficiency and accuracy.
Owner:SHANGHAI JIEKA ROBOT TECH CO LTD
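Step S5 above fuses a 2D detection with the 3D cloud. A hedged sketch of the usual mechanism — project each 3D point through a pinhole model and keep those landing inside the detector's bounding box; the intrinsics and box values here are made-up placeholders, not calibrated values from the patent:

```python
def points_in_bbox(points, bbox, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Keep 3D points whose pinhole projection falls inside a 2D box.

    points: (x, y, z) in the camera frame, z forward; bbox: (u_min, v_min,
    u_max, v_max) in pixels. Intrinsics are illustrative assumptions.
    """
    u_min, v_min, u_max, v_max = bbox
    kept = []
    for x, y, z in points:
        if z <= 0:
            continue  # point is behind the camera
        u = fx * x / z + cx
        v = fy * y / z + cy
        if u_min <= u <= u_max and v_min <= v <= v_max:
            kept.append((x, y, z))
    return kept

# Two points inside a hypothetical YOLO box, one far outside it.
cloud = [(0.0, 0.0, 2.0), (0.1, 0.0, 2.0), (2.0, 0.0, 2.0)]
target = points_in_bbox(cloud, bbox=(300, 220, 360, 260))
```

The cropped cloud is then a much smaller input for segmentation and template matching, which is the efficiency gain the abstract claims.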

Two-dimensional laser radar point cloud data processing method and dynamic robot posture calibration method

The invention discloses a two-dimensional laser radar point cloud data processing method. Environmental data is collected by laser radar scanning and pre-processed; a polar coordinate system centred on the laser radar is converted into a rectangular coordinate system used for posture calibration of a robot; environmental information is effectively detected and recognized through region segmentation, feature extraction, point cloud segmentation, and a fusion detection algorithm combining the two-dimensional laser radar and a camera; and three-dimensional coordinate fusion is performed through the pinhole model of the camera and the laser radar, so that the robot calibrates its posture according to the deviation. The invention also discloses a dynamic robot posture calibration method, in which a camera serves as an auxiliary detection sensor and, together with the laser radar, obtains the real-time posture of the robot; the adjustment amount of the calibration system is obtained by comparing the real-time posture with the desired posture, and calibration is repeated until the deviation between the posture detected by the laser radar and the desired posture falls below a threshold, completing a closed-loop posture calibration process.
Owner:NANCHANG UNIV
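The polar-to-rectangular conversion mentioned above is standard for 2D lidar drivers. A minimal sketch; the start angle and angular resolution are placeholder values a real driver would supply:

```python
import math

def scan_to_cartesian(ranges, angle_min=-math.pi / 2,
                      angle_increment=math.pi / 180):
    """Convert a 2D laser scan (polar, sensor-centred) into (x, y) points.

    ranges[i] is the measured distance at angle_min + i * angle_increment.
    angle_min and angle_increment are assumptions standing in for the
    values reported by the actual lidar driver.
    """
    points = []
    for i, r in enumerate(ranges):
        theta = angle_min + i * angle_increment
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Two beams: straight ahead at 1 m, and 90 degrees left at 2 m.
pts = scan_to_cartesian([1.0, 2.0], angle_min=0.0,
                        angle_increment=math.pi / 2)
```

Downstream steps (region segmentation, feature extraction) then operate on these Cartesian points in the robot frame.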

Network-connected vehicle positioning method based on 3D laser radar and V2X

The invention discloses a network-connected vehicle positioning method based on a 3D laser radar and V2X. The method comprises the following steps: using a laser radar installed on a network-connected vehicle to scan the surrounding environment and obtain radar data; preprocessing the radar data to acquire laser point cloud data, converting the point cloud data into the network-connected vehicle coordinate system, and then, according to the density variation of radar points in the point cloud data, using a raster map to carry out point cloud segmentation so as to separate the pavement from obstacles; clustering the obstacles to acquire the radar point set corresponding to each obstacle, obtaining each obstacle's contour, performing linear fitting to obtain its shape, and combining its size to realize obstacle identification; and finally exchanging obstacle information with other network-connected vehicles through a V2X module. The invention solves the low accuracy and poor reliability of existing vehicle positioning methods, while also alleviating their high demands on mobile-network speed and their high cost.
Owner:CHANGAN UNIV
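The raster-map separation of pavement from obstacles can be sketched with a simple per-cell statistic. The rule used here — a cell is an obstacle if its height spread is large — is a common grid heuristic and an assumption, not necessarily the patent's density-based criterion; cell size and threshold are illustrative:

```python
from collections import defaultdict

def grid_segment(points, cell=0.5, height_spread=0.2):
    """Raster-map segmentation: mark a cell 'obstacle' if its z spread
    exceeds height_spread, else 'ground'.

    Flat pavement cells contain points at nearly one height, while cells
    hit by an obstacle show a large max-min z difference.
    """
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell))].append(z)
    ground, obstacles = [], []
    for key, zs in cells.items():
        (obstacles if max(zs) - min(zs) > height_spread else ground).append(key)
    return ground, obstacles

cloud = [(0.1, 0.1, 0.00), (0.2, 0.2, 0.02),   # flat cell (0, 0)
         (1.6, 0.1, 0.00), (1.7, 0.2, 1.50)]   # tall cell (3, 0)
ground, obstacles = grid_segment(cloud)
```

Obstacle cells would then be clustered and fitted as the abstract describes.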

Airborne laser point cloud classification method based on high-order conditional random field

The invention provides an airborne laser point cloud classification method based on a high-order conditional random field, comprising the following steps: (1) point cloud segmentation based on DBSCAN clustering; (2) point cloud over-segmentation based on K-means clustering; (3) construction of point-set adjacency relations based on Mean-shift clustering; and (4) construction of a point cloud classification method using a high-order conditional random field over the multi-level point sets. The method has the following advantages: (1) a multi-layer clustering point-set construction method is provided, and the connection relations between point sets are built by introducing a category-label-constrained Mean-shift point-set clustering, so the categories of the point sets can be classified more accurately; (2) multi-level point sets can be constructed adaptively for a non-linear number of points, representing information such as the structure and shape of a point cloud target more completely; and (3) a CRF model is constructed with the point sets as first-order terms, achieving higher efficiency and a better classification effect within a unified framework.
Owner:NANJING FORESTRY UNIV
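Step (1) above relies on DBSCAN. A minimal pure-Python DBSCAN over 3D points, with brute-force neighbour search — fine as a sketch, but far too slow for real airborne clouds, which use k-d trees; `eps` and `min_pts` are illustrative:

```python
def dbscan(points, eps=0.5, min_pts=3):
    """Minimal DBSCAN over (x, y, z) tuples; returns one label per point,
    with -1 marking noise. O(n^2) neighbour search for clarity only."""
    def neighbours(i):
        xi, yi, zi = points[i]
        return [j for j, (x, y, z) in enumerate(points)
                if (x - xi) ** 2 + (y - yi) ** 2 + (z - zi) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = list(seeds)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reachable from a core: border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neighbours(j)) >= min_pts:
                queue.extend(neighbours(j))  # core point: keep expanding
    return labels

cloud = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0),   # one dense cluster
         (5, 5, 5)]                              # an isolated noise point
labels = dbscan(cloud, eps=0.5, min_pts=3)
```

The resulting clusters are the segments that the later over-segmentation and CRF stages would refine.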

Robot grinding system with three-dimensional vision and control method thereof

Pending · CN109483369A · Benefits: solves the problem of automatic grinding; realizes flexible grinding work · Classifications: grinding feed control; character and pattern recognition; engineering; 3D camera
The invention discloses a robot grinding system with three-dimensional vision and a control method thereof. The robot grinding system comprises a workpiece fixing device, a grinding robot, an optical device, pneumatic grinding equipment, a power supply, and a control device. The control method comprises the following steps: the grinding robot drives the optical device to horizontally scan a curved-surface workpiece; the laser image collected by a 3D camera is transmitted to the control device; the control device performs three-dimensional point cloud processing on the image obtained by the 3D camera and carries out preliminary filtering and denoising; point cloud segmentation and a slice algorithm are used for further processing of the obtained three-dimensional point cloud; and the processing results are analyzed according to rules and used to drive the robot to perform the complete grinding operation on the curved-surface workpiece in the correct attitude. The system and method are mainly used for automatic grinding in curved-surface structure manufacturing, improving production efficiency and quality by raising the degree of automation in industrial grinding.
Owner:716TH RES INST OF CHINA SHIPBUILDING INDAL CORP +1
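The slice algorithm mentioned above typically cuts the cloud into equal-thickness layers along one axis so each layer can be processed as a planar contour. A hedged sketch; the layer thickness is an assumed value, and the patent's actual slicing axis and rules are not public in this abstract:

```python
def slice_by_height(points, layer=0.1):
    """Group points into equal-thickness horizontal slices by z.

    Returns {layer_index: [points in that slice]}. The 0.1 m thickness
    is an illustrative assumption.
    """
    slices = {}
    for p in points:
        slices.setdefault(int(p[2] // layer), []).append(p)
    return slices

cloud = [(0, 0, 0.01), (1, 0, 0.05), (0, 1, 0.23)]
slices = slice_by_height(cloud)
```

Each slice's points can then be ordered into a tool-path contour for the grinding pass at that height.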

Track center line automatic detection method based on vehicle-mounted mobile laser point cloud

The invention discloses an automatic track center line detection method based on vehicle-mounted mobile laser point cloud. The method comprises the following steps: S1, point cloud segmentation of the roadbed and the track; S2, track point cloud classification; S3, automatic track center line detection: according to the cross-section design drawing of a standard rail, automatically reconstructing a standard three-dimensional geometric model of the rail in segments, iteratively registering the standard rail model with the rail point cloud segmented in S2, and automatically reconstructing a segmented three-dimensional model of the rail; then calculating track geometric parameters from the reconstructed three-dimensional model, calculating track center line points segment by segment on the basis of the reference rail, and connecting the extracted center line points in sequence to form the line center line. The method can rapidly segment roadbed and rail point clouds, reduces the search area for rail point cloud classification, realizes automatic classification of the laser points on the surfaces of the left and right rails, and meets the precision requirements of existing rail transit center line detection.
Owner:CHINA RAILWAY DESIGN GRP CO LTD

Visual 3D pick-and-place method and system based on a cooperative robot

The invention provides a visual 3D pick-and-place method and system based on a cooperative robot. The method comprises the following steps: the internal and external parameters of the cameras of a binocular structured-light three-dimensional scanner are calibrated; hand-eye calibration of the cooperative robot is performed to obtain a calibration result matrix; a three-dimensional digital model of the target objects to be picked and placed is collected; the calibrated scanner acquires point cloud data of the randomly stacked target objects, and the point cloud is segmented into scene point clouds of the multiple target objects; the target object with the highest expected grabbing success rate is selected as the grabbing target; the three-dimensional digital model of the grabbing target is registered with the scene using point-pair features, the predefined pick-and-place pose points are registered into the scene, and the registered pose estimation result serves as the grabbing pose of the grabbing target; and a preliminary grabbing path for the cooperative robot is planned. The target object can be recognized accurately, and the grabbing positioning precision is high.
Owner:XINTUO 3D TECH (SHENZHEN) CO LTD

Simultaneous localization and mapping algorithm based on point cloud segment matching and closed-loop correction

The invention discloses a simultaneous localization and mapping algorithm based on point cloud segment matching and closed-loop correction, belonging to the technical fields of robot autonomous navigation and computer graphics. In the algorithm, inter-frame matching is carried out on feature points extracted from the three-dimensional point cloud to obtain the relative pose transformation of the robot; the obtained poses are stored at the back end in graph form, and the point cloud is then registered against the poses to form a map. The point cloud is fragmented and stored using a point cloud segmentation and description algorithm, and the fragments are matched using a random forest algorithm to form closed-loop constraints; finally, the historical poses and the map are corrected through graph optimization to realize simultaneous localization and mapping. While ensuring local positioning precision, the method stores and corrects historical poses and maps, effectively reducing accumulated errors in long-distance outdoor environments and achieving globally consistent localization and mapping.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA

Improved Euclidean clustering-based scattered workpiece point cloud segmentation method

The invention provides a scattered workpiece point cloud segmentation method based on improved Euclidean clustering, relating to the field of point cloud segmentation. The method proposes a scene segmentation scheme for the inherent disorder and randomness of scattered workpiece point clouds. The specific steps are: preprocessing the point clouds by removing background points with the RANSAC method and removing outliers with iterative radius filtering; providing a parameter-selection basis for online segmentation by registering information against offline template point clouds, thereby increasing the online segmentation speed; removing edge points first, then performing cluster segmentation, and finally restoring the edge points, which avoids under-segmentation or over-segmentation during clustering; during cluster segmentation, using a clustering method based on an adaptive neighborhood search radius, which greatly increases segmentation speed; and preserving workpiece surface features when the edge points are restored, which improves subsequent pose-localization accuracy.
Owner:WUXI XINJIE ELECTRICAL
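The "adaptive neighborhood search radius" can be read as deriving each point's search radius from its local density, e.g. from its nearest-neighbour distance. That reading is an assumption — the patent's exact rule is not public in this abstract — and the scale factor is illustrative:

```python
import math

def nearest_neighbour_distance(points, i):
    """Distance from point i to its nearest neighbour (brute force)."""
    return min(math.dist(points[i], p)
               for j, p in enumerate(points) if j != i)

def euclidean_cluster(points, scale=1.5):
    """Euclidean clustering where each expansion step uses an adaptive
    radius: scale * (nearest-neighbour distance of the current point).

    A plausible sketch of density-adaptive clustering, not a verified
    reimplementation of the patented method.
    """
    labels = [-1] * len(points)
    cluster = -1
    for s in range(len(points)):
        if labels[s] != -1:
            continue
        cluster += 1
        labels[s] = cluster
        queue = [s]
        while queue:
            i = queue.pop()
            radius = scale * nearest_neighbour_distance(points, i)
            for j, p in enumerate(points):
                if labels[j] == -1 and math.dist(points[i], p) <= radius:
                    labels[j] = cluster
                    queue.append(j)
    return labels

cloud = [(0, 0, 0), (0.1, 0, 0), (0.2, 0, 0),   # one workpiece
         (3, 0, 0), (3.1, 0, 0)]                # another, far away
labels = euclidean_cluster(cloud)
```

In dense regions the radius shrinks, so nearby workpieces are not merged; in sparse regions it grows, so a single workpiece is not split.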

Ground point cloud segmentation method based on three-dimensional laser radar

The invention discloses a ground point cloud segmentation method based on a three-dimensional laser radar. The method comprises the following steps: S100, obtaining ground point cloud data: scanning with the ground-based three-dimensional laser radar to obtain spatial information data, combined with GPS (Global Positioning System) positioning to obtain the ground data; S200, data processing: establishing a rectangular coordinate system, converting the coordinates of the obtained ground data into it, and applying equal-thickness layering according to contour-line and contour-interval properties; S300, ground point cloud segmentation: classifying the laser point cloud information obtained by scanning; S400, denoising: classifying the noise data and processing each noise class independently; and S500, image segmentation display: directly extracting building facade information by gridding. By combining positioning with laser scanning, more comprehensive ground point cloud data is obtained, while classified segmentation and step-by-step denoising improve the accuracy of the point cloud data and facilitate subsequent modeling and simulation.
Owner:WUHU HANGFEI SCI & TECH

Method for quickly constructing three-dimensional scene of underground fully mechanized coal mining face

The invention discloses a method for quickly constructing a three-dimensional scene of an underground fully mechanized coal mining face, comprising the following steps: (1) mounting a laser radar, an inertial measurement unit (IMU), and an odometer on a movable holder that moves quickly along a flexible track, the flexible track being mounted on a track support frame on the outer side of the ledge of a scraper conveyor; (2) acquiring and fusing the measurement data of the IMU, the odometer, and the laser radar in real time to obtain laser-scanned three-dimensional point cloud vector data of the fully mechanized coal mining face; (3) performing point cloud denoising, point cloud correction, and point cloud segmentation on the three-dimensional point cloud vector data; (4) generating a triangulated irregular network from the three-dimensional point cloud vector data; and (5) extracting and separating the contour features of the mining equipment and quickly constructing the three-dimensional scene model of the face. The method can rapidly construct the three-dimensional scene information of the fully mechanized coal mining face and display a full view of production simply and visually.
Owner:CHINA UNIV OF MINING & TECH

Point cloud segmentation method and device, computer readable storage medium and terminal

Active · CN109409437A · Benefits: efficient removal; does not affect data characteristics · Classifications: image enhancement; image analysis; outlier; computer terminal
The invention belongs to the technical field of point cloud data segmentation, and particularly relates to a point cloud segmentation method and device, a computer-readable storage medium, and a terminal. The disclosed point cloud segmentation method comprises the following steps: filtering the collected point cloud data to obtain filtered point cloud data; simplifying it to obtain discrete point clouds; calculating the curvature of every discrete point; finding the minimum curvature and taking the corresponding discrete point as the current seed point; and, within a neighborhood circle of preset radius around the current seed point, obtaining all adjacent points and calculating the normal angle between the current seed point and each adjacent point, the smoothness threshold corresponding to that normal angle, the curvature values of all adjacent points, and the corresponding curvature threshold. The method can effectively remove outliers far from the main body and greatly increases data processing speed without affecting the characteristics of the point cloud data.
Owner:ANHUI AGRICULTURAL UNIVERSITY
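The seed-and-threshold procedure above is classic region growing. A hedged sketch following that outline — seed at the minimum-curvature point, admit neighbours whose normal angle and curvature pass thresholds. The normals and curvatures are assumed precomputed (e.g. by PCA over local neighbourhoods), and all threshold values are illustrative:

```python
import math

def region_grow(points, normals, curvatures, radius=0.3,
                angle_thresh=math.radians(10), curv_thresh=0.05):
    """Region growing: start from the minimum-curvature seed; a neighbour
    joins the region if its normal is within angle_thresh of the current
    point's normal, and keeps growing only if it is smooth enough."""
    seed = min(range(len(points)), key=lambda i: curvatures[i])
    region, queue, visited = {seed}, [seed], {seed}
    while queue:
        i = queue.pop()
        for j in range(len(points)):
            if j in visited or math.dist(points[i], points[j]) > radius:
                continue
            visited.add(j)
            cos_angle = sum(a * b for a, b in zip(normals[i], normals[j]))
            if math.acos(max(-1.0, min(1.0, cos_angle))) < angle_thresh:
                region.add(j)
                if curvatures[j] < curv_thresh:  # smooth points keep growing
                    queue.append(j)
    return region

# A flat patch (upward unit normals) next to one tilted point.
pts = [(0, 0, 0), (0.2, 0, 0), (0.4, 0, 0), (0.6, 0, 0.2)]
nrm = [(0, 0, 1), (0, 0, 1), (0, 0, 1), (0.6, 0, 0.8)]
crv = [0.01, 0.01, 0.02, 0.30]
region = region_grow(pts, nrm, crv)
```

Repeating this from the next unassigned minimum-curvature seed segments the whole cloud into smooth surface patches.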

3D point cloud semantic segmentation method under a bird's-eye-view encoding perspective

The invention discloses a 3D point cloud semantic segmentation method under a bird's-eye-view encoding perspective, applied to an input 3D point cloud. The method comprises: converting a voxel-based encoding scheme to a bird's-eye view; extracting the feature of each voxel through a simplified PointNet network; converting the result into a feature map that a 2D convolutional network can process directly; and processing the encoded feature map with a fully convolutional network built from residual modules using decomposed and dilated convolutions, so as to obtain an end-to-end pixel-level semantic segmentation result. This accelerates point cloud semantic segmentation and makes high-precision, real-time segmentation of large scenes feasible on limited hardware. The method can be used directly for robotics, autonomous driving, unordered grasping, and similar tasks; thanks to the design of its encoding scheme and network structure, it achieves high-precision point cloud semantic segmentation with low system overhead, making it well suited to hardware-limited scenarios such as robots and autonomous driving.
Owner:XI AN JIAOTONG UNIV
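The bird's-eye-view encoding above amounts to binning points into a 2D grid and attaching per-cell features. A minimal sketch with hand-picked features (point count and max height) standing in for the learned PointNet features the abstract describes; cell size and ranges are assumptions:

```python
def bev_features(points, cell=0.2, x_range=(0.0, 1.0), y_range=(0.0, 1.0)):
    """Encode a point cloud as a bird's-eye-view grid of simple features.

    Each occupied (row, col) cell stores (point count, max z) - a crude
    stand-in for learned per-voxel features, for illustration only.
    """
    grid = {}
    for x, y, z in points:
        if not (x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]):
            continue  # discard points outside the encoded area
        key = (int((x - x_range[0]) // cell), int((y - y_range[0]) // cell))
        count, zmax = grid.get(key, (0, float("-inf")))
        grid[key] = (count + 1, max(zmax, z))
    return grid

cloud = [(0.05, 0.05, 0.1), (0.08, 0.06, 0.5), (0.55, 0.55, 0.2)]
grid = bev_features(cloud)
```

Once the cloud is a dense 2D grid, ordinary 2D convolutions can process it, which is the source of the speed-up the abstract claims.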

Three-dimensional point cloud semantic segmentation method and device, equipment and medium

The invention relates to the technical field of artificial intelligence, and discloses a three-dimensional point cloud semantic segmentation method, device, equipment, and medium. The method comprises the steps of: dividing and quantitatively discriminating the three-dimensional point cloud data to be predicted using preset spatial cells to obtain target point cloud data; inputting the target point cloud data into a point cloud semantic category prediction model to predict semantic category probabilities, the model having been trained on a PointSIFT neural network module and a PointNet++ neural network; and determining the target semantic category of each point in the target point cloud data from the predicted probabilities. The method performs fast and accurate logical division of the point cloud of a complex large-scale target object, improves the recognition precision of point cloud segmentation, handles the fine features of complex objects well, and improves the accuracy of semantic category prediction.
Owner:PING AN TECH (SHENZHEN) CO LTD