534 results about "Three dimensional vision" patented technology

Man-machine interactive manipulator control system and method based on binocular vision

The invention discloses a man-machine interactive manipulator control system and method based on binocular vision. The system consists of a real-time image acquisition device, a laser guiding device, a programmable controller, and a driving device. The programmable controller comprises a binocular three-dimensional vision module, a three-dimensional coordinate transformation module, an inverse manipulator joint-angle module, and a control module. Color features extracted from the binocular images by the real-time image acquisition device serve as the signal source for controlling the manipulator; the three-dimensional positions of red laser feature points in the real-time view are obtained through the binocular three-dimensional vision system and coordinate transformation, and are used to drive the manipulator in man-machine interactive object-tracking operations. The system and method can track and extract a moving target object in real time and have broad applications such as intelligent prosthesis fitting, explosive-ordnance-disposal robots, and assistive manipulators for the elderly and disabled.
Owner:SHANDONG UNIV OF SCI & TECH
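
The abstract hinges on triangulating a red laser feature point seen in both views of a calibrated stereo pair. Below is a minimal sketch of that step, assuming calibrated 3x4 projection matrices P1 and P2 and illustrative HSV thresholds; none of these values come from the patent.

```python
import cv2
import numpy as np

def red_spot_centroid(bgr):
    """Return the (x, y) centroid of the red laser region, or None."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue bands.
    mask = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 120), (180, 255, 255))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def triangulate_spot(left, right, P1, P2):
    """Triangulate the laser spot seen in both views into 3D camera coordinates."""
    pl, pr = red_spot_centroid(left), red_spot_centroid(right)
    if pl is None or pr is None:
        return None
    X = cv2.triangulatePoints(P1, P2, pl.reshape(2, 1), pr.reshape(2, 1))
    return (X[:3] / X[3]).ravel()  # homogeneous -> Euclidean
```

The recovered 3D point would then feed the inverse joint-angle module as the tracking target.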

Robot grasp pose estimation method based on object recognition deep learning model

The invention discloses a robot grasping pose estimation method based on an object-recognition deep learning model, relating to the technical field of computer vision. The method is based on an RGB-D camera and deep learning, and comprises the following steps: S1, performing camera parameter calibration and hand-eye calibration; S2, training an object detection model; S3, establishing a three-dimensional point cloud template library of the target object; S4, identifying the type and position of each article in the area to be grasped; S5, fusing two-dimensional and three-dimensional vision information to obtain the point cloud of a specific target object; S6, completing the pose estimation of the target object; S7, adopting an error-avoidance algorithm based on sample accumulation; S8, continuously repeating steps S4 to S7 while the robot end moves toward the target object, so as to iteratively refine the pose estimate. The algorithm uses the YOLO object detection model for fast early-stage detection, which reduces the computation required for three-dimensional point cloud segmentation and matching and improves efficiency and accuracy.
Owner:SHANGHAI JIEKA ROBOT TECH CO LTD
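
A hedged sketch of steps S5 and S6: crop the point cloud behind a 2D detection box and register it to a stored template with ICP. It uses Open3D; the intrinsics (fx, fy, cx, cy), the box, and the template cloud are illustrative assumptions, not values from the patent.

```python
import numpy as np
import open3d as o3d

def crop_cloud_from_box(depth_m, box, fx, fy, cx, cy):
    """Back-project the depth pixels inside a (x0, y0, x1, y1) box to 3D."""
    x0, y0, x1, y1 = box
    us, vs = np.meshgrid(np.arange(x0, x1), np.arange(y0, y1))
    z = depth_m[y0:y1, x0:x1]
    valid = z > 0
    pts = np.stack([(us[valid] - cx) * z[valid] / fx,
                    (vs[valid] - cy) * z[valid] / fy,
                    z[valid]], axis=1)
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(pts)
    return pcd

def estimate_pose(scene_pcd, template_pcd, init=np.eye(4)):
    """Refine the object pose by point-to-point ICP against the template."""
    result = o3d.pipelines.registration.registration_icp(
        template_pcd, scene_pcd, max_correspondence_distance=0.01, init=init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 template -> scene
```

Running this inside the robot's approach loop (steps S4-S7) would give the iterative refinement the abstract describes.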

Vehicle type accurate classification system and method based on real-time double-line video stream

CN103794056A (Active) · Meets the technical requirements of automatic classification and detection · Tags: road vehicle traffic control, character and pattern recognition, physical space, three-dimensional measurement
The invention discloses a vehicle type accurate classification system and method based on real-time dual-line video streams. The system comprises a vehicle-body scanning camera, a high-definition capture camera, and a video vehicle detector. In the method, the vehicle-body scanning camera and the high-definition capture camera perform dual-line video acquisition of passing vehicles; after field-of-view calibration, a one-to-one logical mapping between virtual pixel space and real physical space is established; vehicles in the field of view are separated as targets; accurate physical data of each vehicle are produced, model reconstruction and three-dimensional measurement are carried out, and the vehicle type is finally determined. By combining the two-line high-definition video streams of the two cameras, an embedded dual-channel video vehicle-type detector, and three-dimensional vision body-model reconstruction and fitting, the system can provide multiple vehicle parameters simultaneously and classify vehicles accurately.
Owner:BEIJING SINOITS TECH
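
The field-of-view calibration step amounts to mapping pixel coordinates onto the road plane so that body dimensions can be measured in metres. A minimal sketch with a plane homography follows; the four pixel/ground correspondences are illustrative assumptions.

```python
import cv2
import numpy as np

# Pixel corners of a marked rectangle on the road (u, v) ...
px = np.float32([[102, 540], [880, 548], [820, 120], [160, 116]])
# ... and their surveyed ground positions in metres (x, y).
world = np.float32([[0.0, 0.0], [3.5, 0.0], [3.5, 12.0], [0.0, 12.0]])

H, _ = cv2.findHomography(px, world)

def pixel_to_ground(u, v):
    """Map one image point into road-plane coordinates (metres)."""
    p = cv2.perspectiveTransform(np.float32([[[u, v]]]), H)
    return p[0, 0]  # (x, y) on the ground plane

# e.g. body length ~ distance between pixel_to_ground() of the front
# and rear contact points found by target separation.
```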

Indoor autonomous drone navigation method based on three-dimensional vision SLAM

The invention provides an indoor autonomous drone navigation method based on three-dimensional vision SLAM. The method comprises the following steps: an RGB-D camera obtains a color image and depth data of the drone's surroundings; the drone's onboard system extracts feature points; the system judges whether enough feature points exist (if more than 30 feature points are found, pose calculation proceeds; otherwise relocalization is performed); a bundle adjustment method is used for global optimization; and an incremental map is built. Drone pose information is obtained with only one RGB-D camera and the three-dimensional environment is reconstructed, avoiding the complex depth-recovery process of a monocular camera and the complexity and robustness problems of binocular matching algorithms. An iterative closest point method is combined with a reprojection-error algorithm, making drone pose estimation more accurate. The drone can localize, navigate, and fly autonomously indoors and in other unknown environments, solving the problem that localization is impossible without a GPS signal.
Owner:JIANGSU ZHONGKE INTELLIGENT SCIENCE & TECHNOLOGY APPLICATION RESEARCH INSTITUTE
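
A hedged sketch of the feature-tracking branch, including the 30-feature threshold quoted in the abstract: ORB features are matched against the previous frame and the new pose comes from 3D-2D PnP. The intrinsics K, the matcher settings, and the prev_kps3d/prev_desc inputs are assumptions for illustration.

```python
import cv2
import numpy as np

MIN_FEATURES = 30  # threshold quoted in the abstract

def track_pose(prev_kps3d, prev_desc, gray, K):
    """Return (R, t) of the camera, or None to trigger relocalization."""
    orb = cv2.ORB_create(1000)
    kps, desc = orb.detectAndCompute(gray, None)
    if desc is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(prev_desc, desc)
    if len(matches) <= MIN_FEATURES:
        return None  # not enough features: relocalize
    # 3D points back-projected from the previous depth image vs. new pixels.
    obj = np.float32([prev_kps3d[m.queryIdx] for m in matches])
    img = np.float32([kps[m.trainIdx].pt for m in matches])
    ok, rvec, tvec, _ = cv2.solvePnPRansac(obj, img, K, None)
    return (cv2.Rodrigues(rvec)[0], tvec) if ok else None
```

The resulting poses would then be refined globally by bundle adjustment and combined with ICP, as the abstract describes.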

Method for rapidly constructing a three-dimensional building scene from true orthophotos

The invention provides a method for rapidly constructing a three-dimensional building scene from a true orthophoto, comprising the following steps: first, extracting building roof outlines from the true orthophoto to obtain a plane vector diagram together with its planar vector coordinates; second, triangulating the building roofs and the ground polygons in the plane vector diagram, and generating a building model and a ground model by combining digital surface model (DSM) and digital elevation model (DEM) data; third, merging the building model and the ground model into a three-dimensional scene model; fourth, superimposing texture maps from the true orthophoto onto the three-dimensional scene model to generate the three-dimensional building scene. The method accelerates the modeling of three-dimensional building scenes and reduces the manual effort of modeling and texture design. A given zone needs to be modeled and texture-mapped only once, a true and effective three-dimensional visual effect is obtained, and the workload of scene modeling is greatly reduced.
Owner:关鸿亮
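
A minimal sketch of steps two and three: extrude one building footprint into a roof-and-walls mesh, with ground height taken from the DEM and roof height from the DSM. The fan triangulation shown is only valid for convex footprints; a real implementation would use a proper polygon triangulator, and both height values are assumed inputs here.

```python
import numpy as np

def extrude_footprint(footprint_xy, ground_z, roof_z):
    """Return (vertices, triangles) for a prism built from a convex footprint."""
    n = len(footprint_xy)
    roof = [(x, y, roof_z) for x, y in footprint_xy]
    base = [(x, y, ground_z) for x, y in footprint_xy]
    verts = np.array(roof + base, dtype=float)
    tris = []
    for i in range(1, n - 1):           # roof: triangle fan over the polygon
        tris.append((0, i, i + 1))
    for i in range(n):                  # walls: two triangles per footprint edge
        j = (i + 1) % n
        tris.append((i, j, n + j))
        tris.append((i, n + j, n + i))
    return verts, np.array(tris)

# roof_z would be sampled from the DSM inside the footprint and
# ground_z from the DEM around it; the orthophoto then supplies
# the texture mapped onto the roof triangles.
```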

Robot grinding system with three-dimensional vision and control method thereof

CN109483369A (Pending) · Solves the problem of automatic grinding; realizes flexible grinding work · Tags: grinding feed control, character and pattern recognition, engineering, 3D camera
The invention discloses a robot grinding system with three-dimensional vision and a control method thereof. The robot grinding system comprises a workpiece fixing device, a grinding robot, an optical device, pneumatic grinding equipment, and a power supply and control device. In the control method, the grinding robot drives the optical device to scan horizontally across a curved-surface workpiece; the laser image collected by a 3D camera is transmitted to the power supply and control device; the control device performs three-dimensional point cloud processing on the image from the 3D camera using corresponding functions, applying preliminary filtering and denoising; point cloud segmentation and a slicing algorithm further process the resulting three-dimensional point cloud; and the processed results are analyzed according to rules and sent to the control device, which drives the robot to grind the whole curved-surface workpiece in the correct attitude. The system and method are mainly used for automatic grinding in curved-surface structure manufacturing, and improve production efficiency and quality by raising the degree of automation of industrial grinding.
Owner:716TH RES INST OF CHINA SHIPBUILDING INDAL CORP +1
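
A hedged sketch of the point-cloud stage: denoise the scanned cloud, then cut it into strips along the scan axis so a grinding path can be fitted to each strip. The Open3D calls, voxel size, and slice width are assumptions, not values from the patent.

```python
import numpy as np
import open3d as o3d

def denoise(pcd):
    """Voxel-downsample, then drop statistical outliers (preliminary filtering)."""
    pcd = pcd.voxel_down_sample(voxel_size=0.5)  # mm, illustrative
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return pcd

def slice_along_x(pcd, width=2.0):
    """Group points into strips of `width` along x (input to the slice algorithm)."""
    pts = np.asarray(pcd.points)
    strip = ((pts[:, 0] - pts[:, 0].min()) // width).astype(int)
    return [pts[strip == k] for k in range(strip.max() + 1)]
```

Each strip would then be segmented and fitted to produce one pass of the grinding trajectory.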

Structured light parameter calibration method based on a one-dimensional target

The invention belongs to the technical field of measurement and relates to an improved calibration method for structured light parameters in structured light 3D vision measurement. A calibration method based on a one-dimensional target is provided. After the sensor is installed, its camera captures several images of the one-dimensional target in free, non-parallel motion. The vanishing point of the feature line on the target is obtained by one-dimensional projective transformation, and the direction vector of the feature line in the camera coordinate system is determined from this vanishing point and the camera projection center. The camera coordinates of a reference point on the feature line are computed from the length constraints among the feature points and the direction constraint of the feature line, yielding the equation of the feature line in the camera coordinate system. The camera coordinates of control points on several non-collinear light stripes are then obtained from the projective transformation and the feature-line equation, and the structured light parameters are obtained by fitting the control points. The method requires no expensive auxiliary adjustment equipment, offers high calibration precision with a simple procedure, and meets the on-site calibration needs of large-scale structured light 3D vision measurement.
Owner:BEIHANG UNIV
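
One geometric step worth making concrete: the feature line's direction in camera coordinates follows from its vanishing point v as d ~ K⁻¹v. The sketch below also recovers v for the special case of three equally spaced collinear feature points (the midpoint and the vanishing point are harmonic conjugates, cross-ratio -1); the equal spacing is an illustrative assumption, not the patent's general constraint.

```python
import numpy as np

def vanishing_point(a, b, c):
    """Vanishing point of the image line through a, b, c when the 3D
    preimage of b is the midpoint (harmonic conjugate, cross-ratio -1)."""
    a, b, c = (np.asarray(p, float) for p in (a, b, c))
    t = np.linalg.norm(b - a) / np.linalg.norm(c - a)  # b's line parameter
    # Cross-ratio (a, c; b, v) = -1 gives v's parameter s = t / (2t - 1).
    # (t == 0.5 would mean an affine view: v at infinity.)
    s = t / (2.0 * t - 1.0)
    return a + s * (c - a)

def line_direction(K, v):
    """Unit direction of the feature line in the camera frame, d ~ K^-1 v."""
    d = np.linalg.solve(K, np.append(v, 1.0))
    return d / np.linalg.norm(d)
```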

Method For Displaying Recognition Result Obtained By Three-Dimensional Visual Sensor And Three-Dimensional Visual Sensor

Display suited to an actual three-dimensional model or a recognition-target object is performed when the three-dimensional model is displayed stereoscopically while correlated to an image used in three-dimensional recognition processing. After the position and rotation angle of a workpiece are recognized using the three-dimensional model, a coordinate transformation of the model is performed based on the recognition result, and the post-transformation Z-coordinate is corrected according to the angle (elevation angle φ) formed between the line-of-sight direction and the imaging surface. The corrected three-dimensional model is then perspective-transformed into the coordinate system of the processing camera, and each point of the resulting projection image is assigned a height according to the pre-correction Z-coordinate of the corresponding point of the pre-transformation model. Projection from a specified line-of-sight direction onto the point group thus distributed in three dimensions produces a stereoscopic image of the three-dimensional model.
Owner:OMRON CORP
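
The core of the pipeline is: apply the recognized pose to the model, then perspective-project it into the processing camera. A minimal sketch follows, with the elevation-angle height correction omitted for brevity; K, R_obj, t_obj, and the model points are assumptions.

```python
import cv2
import numpy as np

def project_model(model_pts, R_obj, t_obj, K, dist=None):
    """Transform model points by the recognized pose, then project them."""
    posed = model_pts @ R_obj.T + t_obj          # coordinate transformation
    rvec = np.zeros(3)                           # already in the camera frame,
    tvec = np.zeros(3)                           # so no further motion
    img_pts, _ = cv2.projectPoints(posed, rvec, tvec, K, dist)
    return img_pts.reshape(-1, 2)                # N x 2 pixel coordinates
```

In the patented method, each projected point would additionally carry a height from the pre-transformation Z-coordinate so the final rendering reads as a stereoscopic image.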

Method of and device for re-calibrating three-dimensional visual sensor in robot system

A re-calibration method and device for a three-dimensional visual sensor of a robot system, whereby the workload required for re-calibration is reduced. While the visual sensor is operating normally, the visual sensor and a measurement target are arranged by a robot in one or more relative positional relations, and the target is measured to acquire position/orientation information of a dot pattern or similar feature using the calibration parameters held at that time. During re-calibration, each relative positional relation is approximately reproduced, and the target is measured again to acquire feature-amount information or the position/orientation of the dot pattern on the image. Based on the feature-amount data and the position information, the calibration parameters of the visual sensor are updated. At least one of the visual sensor and the target, which are brought into the relative positional relation, is mounted on the robot arm. During re-calibration, position information may also be calculated using the held calibration parameters together with the feature-amount information obtained during normal operation and during re-calibration, and the calibration parameters may be updated based on the calculation results.
Owner:FANUC LTD
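
A hedged sketch of the update step: with the dot-pattern target roughly re-posed by the robot, re-estimate the sensor's extrinsics from the observed dots. solvePnP stands in for the patent's unspecified update rule, and the pattern geometry and intrinsics K are assumptions.

```python
import cv2
import numpy as np

def recalibrate_extrinsics(pattern_pts3d, observed_px, K, dist=None):
    """Return updated (R, t) of the dot-pattern target in the sensor frame.

    pattern_pts3d: N x 3 known dot positions on the target.
    observed_px:   N x 2 dot centers measured during re-calibration.
    """
    ok, rvec, tvec = cv2.solvePnP(pattern_pts3d, observed_px, K, dist)
    if not ok:
        raise RuntimeError("re-calibration measurement failed")
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```

Comparing (R, t) against the values stored while the sensor was normal would give the correction to fold into the held calibration parameters.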

Helicopter rotor blade motion parameter measuring method based on binocular three-dimensional vision

A helicopter rotor blade motion parameter measuring method based on binocular three-dimensional vision includes the following steps: (1) constructing a rotor blade motion image acquisition device from two binocular three-dimensional vision systems; (2) calibrating each binocular three-dimensional vision system with a standard calibration template; (3) attaching a number of marker points to each blade of the helicopter; (4) after matching the blade marker points between the left and right images of the binocular system, using the calibrated intrinsic and extrinsic camera parameters to measure the three-dimensional positions of the markers on the blades; (5) using the three-dimensional marker positions obtained in step (4) to compute the blade motion parameters according to the definitions of flapping, lead-lag (shimmying), and torsion. The method has the advantages of simple operation, non-contact measurement, a low risk coefficient, and high precision.
Owner:NANCHANG HANGKONG UNIVERSITY
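
A minimal sketch of steps (4) and (5): triangulate the matched blade markers from a calibrated stereo pair, then read a flapping angle off the fitted blade axis. The projection matrices P1/P2, the rotor-plane normal, and the PCA-based axis fit are illustrative assumptions.

```python
import cv2
import numpy as np

def marker_points_3d(pts_left, pts_right, P1, P2):
    """Triangulate N matched markers; inputs are 2 x N pixel arrays."""
    X = cv2.triangulatePoints(P1, P2, pts_left, pts_right)
    return (X[:3] / X[3]).T  # N x 3 in the left-camera frame

def flapping_angle(markers3d, rotor_normal):
    """Angle between the blade axis (principal direction of the markers)
    and the rotor plane, in degrees."""
    centred = markers3d - markers3d.mean(axis=0)
    axis = np.linalg.svd(centred, full_matrices=False)[2][0]
    n = rotor_normal / np.linalg.norm(rotor_normal)
    return np.degrees(np.arcsin(abs(axis @ n)))
```

Lead-lag would be measured analogously in the rotor plane, and torsion from the rotation of marker rows about the blade axis.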
