
113 results about "Robot vision systems" patented technology

Simulation method and device of digital twin system of industrial robot

The invention provides a simulation method and device for a digital twin system of an industrial robot. The device comprises the industrial robot, a vision perception unit and a computer, wherein the vision perception unit is mounted at the end of the robot and consists of a camera and a line-structured-light emitter. An industrial robot simulation system is established, the robot is modeled, and motion instructions are parsed and executed; a three-dimensional measurement model of the robot vision system is obtained by the geometric triangulation method of line-structured light; motion instructions are determined for the robot's working task, virtual simulation is carried out on the computer, and reachable-point and collision detection are performed; finally, the real robot identifies the target object through the vision perception unit, which drives its actual movement. The virtual simulation is safe and reliable and avoids abnormal conditions that might occur during actual operation; identifying the target object with three-dimensional vision technology improves the robot's adaptability to the field environment and enhances flexibility and intelligence.
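The line-structured-light triangulation step can be sketched as a ray-plane intersection: the camera ray through a laser-stripe pixel is intersected with the known laser plane. A minimal sketch; all intrinsics and the plane equation below are illustrative assumptions, not values from the patent.

```python
# Line-structured-light triangulation: intersect the back-projected camera
# ray through a stripe pixel with the calibrated laser plane n.X + d = 0.

def triangulate(u, v, fx, fy, cx, cy, plane_n, plane_d):
    """Return the 3-D point (camera frame) where the pixel ray meets the
    laser plane."""
    # Back-project pixel (u, v) to a ray direction through the optical centre.
    r = ((u - cx) / fx, (v - cy) / fy, 1.0)
    denom = sum(n_i * r_i for n_i, r_i in zip(plane_n, r))
    if abs(denom) < 1e-9:
        raise ValueError("ray is parallel to the laser plane")
    t = -plane_d / denom
    return tuple(t * r_i for r_i in r)

# Example: laser plane x = 0.1 m (normal (1, 0, 0), d = -0.1).
point = triangulate(700, 240, fx=600, fy=600, cx=320, cy=240,
                    plane_n=(1.0, 0.0, 0.0), plane_d=-0.1)
```

Sweeping the stripe over the part and triangulating every stripe pixel this way yields the target's three-dimensional measurement model.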
Owner:XI AN JIAOTONG UNIV

Three-dimensional point cloud-based patrol robot vision system and control method

The invention relates to a three-dimensional point-cloud-based patrol robot vision system and a control method. The system collects point-cloud data of the patrol environment with an RGB-D camera and builds a three-dimensional map of the environment through point-cloud fusion; obstacle avoidance and optimal path planning are performed with the artificial potential field method; a convolutional neural network recognition algorithm, fused with three-dimensional object features, identifies target objects in the patrol environment, and their three-dimensional coordinates are located accurately from the mapping relationship between the target object and the camera; real-time data acquired by the patrol robot is transmitted quickly to a control terminal over a wireless network; and an operator can monitor or play back the patrol in real time through the control terminal and command the robot to execute patrol tasks. With this control method, the robot's working environment is unaffected by changes in ambient light, and patrol tasks can be completed in dark conditions.
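The artificial potential field planner mentioned above combines an attractive force toward the goal with repulsive forces from nearby obstacles. A minimal 2-D sketch under assumed gains and a fixed step length; the patent's actual parameters and 3-D formulation are not given.

```python
import math

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=1.0, step=0.1):
    """One step of a 2-D artificial potential field planner: move a fixed
    distance along the net force direction."""
    fx = k_att * (goal[0] - pos[0])           # attractive force
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        rho = math.hypot(dx, dy)
        if 0 < rho < rho0:
            # Repulsive force grows sharply as the robot nears the obstacle.
            mag = k_rep * (1.0 / rho - 1.0 / rho0) / rho ** 2
            fx += mag * dx / rho
            fy += mag * dy / rho
    norm = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

# Drive toward (5, 0) while skirting an obstacle just off the straight line.
pos = (0.0, 0.0)
for _ in range(200):
    pos = apf_step(pos, goal=(5.0, 0.0), obstacles=[(2.5, 0.5)])
```

The robot deflects around the obstacle's influence radius and settles near the goal; real deployments add escape strategies for local minima, which this sketch omits.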
Owner:YANSHAN UNIV

Transportation device based on robot vision system and independent tracking system

The invention discloses a transportation device based on a robot vision system and an independent tracking system. The device comprises a transport vehicle body fitted with a control system, an initial-target information storage system, a vision recognition system, a decision control system and an independent tracking system, the latter four each connected to the control system. The initial-target information storage system stores the body-feature information of the initial target and sets a luggage-collection code or sensing signal; the vision recognition system processes the extracted image information; the decision control system tracks and judges the initial target, optimizes the path, and generates and issues control instructions to the independent tracking system; and the independent tracking system applies differential speed regulation to the wheels of the vehicle body, driving it to track the target. Vision recognition locks the positions of the target object and of the vehicle itself, decision control is realized through a genetic PID (proportional-integral-derivative) algorithm, and the moving mechanism executes the tracking, achieving the goal of closely following passengers.
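The differential speed regulation step can be sketched as a PID controller on heading error whose output speeds up one wheel and slows the other. The patent evolves the gains with a genetic algorithm; this sketch fixes them by hand, and all names and values are illustrative.

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev = 0.0

    def update(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev) / self.dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def wheel_speeds(base_speed, heading_error, pid):
    """Differential speed regulation: steer toward the target by adding the
    PID correction to one wheel and subtracting it from the other."""
    turn = pid.update(heading_error)
    return base_speed - turn, base_speed + turn  # (left, right)

# Target is 0.2 rad to the left: the right wheel must run faster.
pid = PID(kp=2.0, ki=0.1, kd=0.05, dt=0.02)
left, right = wheel_speeds(0.5, heading_error=0.2, pid=pid)
```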
Owner:CHONGQING UNIV

Vision system based on active panoramic vision sensor for robot

The invention discloses a vision system for a robot based on an active panoramic vision sensor. The system comprises an omnidirectional vision sensor, a key-plane laser light source, and a microprocessor that performs three-dimensional measurement, obstacle detection, obstacle avoidance and navigation on the omnidirectional image; the omnidirectional vision sensor and the laser source are mounted on the same axis. The microprocessor contains a video-image reading module, an omnidirectional sensor calibration module, a Bird-View conversion module, an omnidirectional surface-laser information reading module, an obstacle feature-point calculation module, a module for estimating the spatial distribution of obstacles between key planes, an obstacle contour generation module and a storage unit. Spatial data points on the laser-scanned key planes are fused with the corresponding pixels of the omnidirectional image, so each spatial point carries both geometric and color information, and a quasi-three-dimensional omnidirectional map of the unknown environment is finally built. This reduces computer resource consumption, completes measurement quickly, and facilitates obstacle avoidance and navigation of the robot.
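The Bird-View conversion maps an omnidirectional-image pixel onto the ground plane. A minimal sketch assuming an equiangular mirror (angle from the nadir grows linearly with radial pixel distance); the patent relies on a calibrated sensor model instead, so the mapping below is illustrative only.

```python
import math

def birdview_point(u, v, cx, cy, height, deg_per_px):
    """Project an omnidirectional-image pixel onto the ground plane,
    assuming an equiangular mirror model (an assumption of this sketch)."""
    du, dv = u - cx, v - cy
    r = math.hypot(du, dv)
    if r == 0:
        raise ValueError("centre pixel maps to the point under the sensor")
    theta = math.radians(r * deg_per_px)   # angle from the vertical axis
    dist = height * math.tan(theta)        # radial distance on the ground
    phi = math.atan2(dv, du)               # azimuth around the sensor
    return dist * math.cos(phi), dist * math.sin(phi)

# Sensor 1 m above the ground, 0.45 deg per pixel: a pixel 100 px from the
# image centre corresponds to a 45-degree ray, i.e. 1 m away on the ground.
x, y = birdview_point(420, 240, cx=320, cy=240, height=1.0, deg_per_px=0.45)
```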
Owner:ZHEJIANG UNIV OF TECH

Hand-eye calibration parameter identification method, hand-eye calibration parameter identification system based on differential evolution algorithm and medium

The invention provides a hand-eye calibration parameter identification method and system based on a differential evolution algorithm, and a medium. The method comprises the following steps: moving the robot end of a robot vision system to different poses to collect robot joint data and camera image data; separately calculating the pose matrix of the robot end relative to the robot base coordinate system and the pose matrix of the calibration board relative to the camera coordinate system; defining calibration error functions for the rotational and translational components; formulating and solving a multi-objective optimization function for the hand-eye calibration problem; and separately computing the calibration errors of the rotational and translational parts of the robot vision system to verify and identify the optimal hand-eye calibration parameters. The method attains global optimality of the calibration result, the result lies on the special Euclidean group SE(3), and no additional orthogonalization of the obtained rotation matrix is required.
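The optimizer at the core of the method can be sketched as a minimal DE/rand/1/bin differential evolution loop. The objective below is a toy quadratic stand-in for the calibration error, not the patent's rotational/translational error functions, and all parameters are illustrative.

```python
import random

def differential_evolution(objective, bounds, pop_size=20, f=0.8, cr=0.9,
                           generations=200, seed=1):
    """Minimal DE/rand/1/bin optimiser (illustrative sketch)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutate three distinct individuals other than the current one.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)     # guarantee one mutated dimension
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == jrand:
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][j])
            tc = objective(trial)
            if tc <= cost[i]:              # greedy selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Toy stand-in for a hand-eye translation error: true offset in metres.
true_t = (0.10, -0.05, 0.20)
def err(t):
    return sum((ti - gi) ** 2 for ti, gi in zip(t, true_t))

best, best_cost = differential_evolution(err, bounds=[(-1, 1)] * 3)
```

Because DE searches the whole bounded space, it can recover a globally optimal parameter vector where gradient methods would stop at a local minimum.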
Owner:SHANGHAI JIAO TONG UNIV

Visual-assembly production line of motor rotor and assembly process

Active patent CN105656260A. Benefits: realizes real-time deviation correction; avoids problems such as low work efficiency. Classifications: manufacturing stator/rotor bodies; location detection; production lines.
The invention relates to a visual-assembly production line for a motor rotor and an assembly process. The production line comprises automatic shaft-sleeve discharging equipment, automatic end-plate discharging equipment, a single-arm hydraulic press, magnetic-steel embedding equipment, a rotor iron-core pallet, a rotary-shaft pallet, a pressed-finished-product pallet, a robot, a PLC (programmable logic controller), an industrial camera, a ring light source and an industrial flat-panel computer. The robot, the PLC, the hydraulic press, the shaft-sleeve discharging equipment, the three pallets, the end-plate discharging equipment and the magnetic-steel embedding equipment form the robot execution system; the industrial camera, the ring light source and the flat-panel computer form the robot's vision system. The flat-panel computer runs vision software for recognizing the position of a workpiece; the ring light source is fixed on the industrial camera; and the camera and light source are mounted coaxially on the robot's mechanical arm, held perpendicular to the workpiece surface when detecting its position.
Owner:HEBEI UNIV OF TECH +1

Bionic zoom lens and driving device thereof

The invention discloses a bionic zoom lens and its driving device. A cemented-doublet refractive objective lens and a colloid lens serve as the refractive elements. Imitating the zooming behavior of the human cornea and crystalline lens, the cemented doublet acts as the first lens unit of the zoom lens and refracts light in advance, while the colloid lens acts as the second unit, simulating the crystalline lens. An objective fixing frame and a fixing sleeve are mounted at the two ends of a voice coil motor; the doublet is installed in the fixing frame; a press ring is threaded onto the inner ring of the voice coil motor so that its rear face contacts the front surface of the colloid lens; and the fixing sleeve is connected to the colloid lens housing. Driving the press ring with the inner ring of the voice coil motor squeezes the front surface of the colloid lens and changes its surface curvature, achieving continuous zooming over the designed range. The lens offers a stable optical axis, good imaging quality and rapid response, and can be widely used in robot vision systems and a variety of modern imaging systems.
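The zooming effect of changing surface curvature follows directly from the thin-lens lensmaker's equation. A simplified single-element sketch with illustrative radii and refractive index (the real colloid lens is thicker and the patent gives no optical constants):

```python
def focal_length(n, r1, r2):
    """Thin-lens lensmaker's equation: 1/f = (n - 1)(1/R1 - 1/R2).
    Radii in metres; positive R means the centre of curvature lies on
    the image side."""
    return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))

# Squeezing the colloid lens steepens its front surface (R1 shrinks),
# which shortens the focal length -- the zooming action described above.
f_relaxed = focal_length(n=1.45, r1=0.050, r2=-0.050)   # about 55.6 mm
f_squeezed = focal_length(n=1.45, r1=0.030, r2=-0.050)  # shorter
```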
Owner:ZHEJIANG UNIV

Camera pose calibration method based on spatial point location information

The invention discloses a camera pose calibration method based on spatial point-location information. For a robot vision system with an independently mounted camera, a sphere (a table-tennis ball) is fixed at the end of the robot as the calibration object; the robot is then moved to change the sphere's position and posture through different point locations while images and point clouds of the sphere are collected. The sphere center is fitted as a spatial point, and the corresponding robot pose is recorded at the same time; the transformation between the camera coordinate system and the robot base coordinate system is then computed from the equations relating how these specific points change; finally, points collected in the camera coordinate system are converted into the robot base coordinate system, directly enabling vision-guided target grasping by the robot. Using a sphere as the calibration object makes the operation simple, flexible and portable and simplifies the tedious calibration process; compared with conversion via a calibration plate or an intermediate calibration coordinate system, the method achieves higher precision and introduces no intermediate transformations or extra error sources.
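The sphere-center fitting step can be done as a linear least-squares problem: every point on the sphere satisfies x²+y²+z² = 2ax + 2by + 2cz + d, where (a, b, c) is the center and d = r² − a² − b² − c². A self-contained sketch solving the 4×4 normal equations (the patent does not specify its fitting algorithm):

```python
def fit_sphere(points):
    """Least-squares sphere fit via the linearisation
    x^2+y^2+z^2 = 2ax + 2by + 2cz + d."""
    # Accumulate the 4x4 normal equations A^T A u = A^T b.
    ata = [[0.0] * 4 for _ in range(4)]
    atb = [0.0] * 4
    for x, y, z in points:
        row = (2 * x, 2 * y, 2 * z, 1.0)
        rhs = x * x + y * y + z * z
        for i in range(4):
            atb[i] += row[i] * rhs
            for j in range(4):
                ata[i][j] += row[i] * row[j]
    # Gauss-Jordan elimination with partial pivoting.
    m = [ata[i] + [atb[i]] for i in range(4)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(4):
            if r != col:
                k = m[r][col] / m[col][col]
                m[r] = [mr - k * mc for mr, mc in zip(m[r], m[col])]
    a, b, c, d = (m[i][4] / m[i][i] for i in range(4))
    radius = (d + a * a + b * b + c * c) ** 0.5
    return (a, b, c), radius

# Six exact points on a 20 mm-radius ball centred at (1, 2, 3) m.
center, radius = fit_sphere([
    (1.02, 2.0, 3.0), (0.98, 2.0, 3.0),
    (1.0, 2.02, 3.0), (1.0, 1.98, 3.0),
    (1.0, 2.0, 3.02), (1.0, 2.0, 2.98),
])
```

Repeating the fit at each robot pose yields the spatial point pairs from which the camera-to-base transformation is solved.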
Owner:EUCLID LABS NANJING CORP LTD

Weeding robot with both functions of targeted quantitative spraying and mechanical fixed point shoveling

The invention relates to a weeding robot combining targeted quantitative spraying with mechanical fixed-point shoveling. The robot consists of a dual-hub electrically driven mechanical platform, a robot vision system, an intelligent control system, an automatic steering system, a horizontal-maintaining mechanism, a foldable weeding platform, a selectable targeted quantitative spraying system, an electric assist system and a feedback system. After power-on, robot vision tracks the orientation of in-field seedling rows to achieve automatic intelligent navigation. Meanwhile the foldable weeding platform unfolds, five sets of image acquisition devices collect in-field data in real time, and weed information is analyzed and integrated into pulse instructions; a solenoid valve is opened and closed to drive a stepper motor on the platform, rotating twin nozzles for spatially fixed-point, quantitative, targeted spraying. Sensor feedback adjusts the extension of a push-rod motor to level the horizontal-maintaining mechanism so that the weeding mechanism stays at a constant height above the ground, allowing accurate weeding. The robot can be applied to agricultural weeding operations.
Owner:NORTHEAST AGRICULTURAL UNIVERSITY

Online grinding method of weld joint grinding and polishing robot

The invention discloses an online grinding method for a weld-seam grinding and polishing robot, comprising the following steps: (1) calibrate the relations among the coordinate frames of the vision measurement unit, the robot and the grinding wheel; (2) establish communication among the robot, the vision system and a PC; (3) establish the datum and contour datum of the vision system; (4) create the robot motion start point and move the grinding wheel to the grinding/polishing start point of the structural-part weld seam; (5) adjust the grinding-wheel speed with a frequency converter while the robot slowly approaches the weld-seam start point; (6) the vision measurement unit recognizes the weld-seam contour, processes the point-cloud data and obtains the weld-seam feature value; (7) the robot reports its current position, and the vision system feeds the feature-point data extracted at that moment back to the robot; (8) the robot is controlled to track the weld-seam contour in real time while continuously signaling the vision system, repeating steps (7) and (8); and (9) the robot completes the grinding and polishing of the weld seam and returns to the Home point.
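The real-time tracking loop of steps 5 to 8 can be sketched as a read-feature/step-toward cycle. `make_reader`, `read_seam_feature` and `move_robot_to` are hypothetical stand-ins for the vision-system and robot-controller interfaces, and the step bound is an illustrative value:

```python
def make_reader(points):
    """Hypothetical vision feed: returns the next detected seam point,
    or None when the weld seam ends."""
    queue = list(points)
    def read():
        return queue.pop(0) if queue else None
    return read

def track_seam(read_seam_feature, move_robot_to, start_pose, step=0.001):
    """Repeatedly read the seam feature point from the vision unit and
    command a bounded robot increment toward it (steps 7-8)."""
    pose = list(start_pose)
    while True:
        feature = read_seam_feature()
        if feature is None:          # seam finished: stop tracking (step 9)
            break
        for i in range(3):
            delta = feature[i] - pose[i]
            pose[i] += max(-step, min(step, delta))   # bounded increment
        move_robot_to(tuple(pose))
    return tuple(pose)

# Simulated seam running along x in 1 mm increments.
moves = []
reader = make_reader([(0.001 * k, 0.0, 0.0) for k in range(1, 6)])
final = track_seam(reader, moves.append, (0.0, 0.0, 0.0))
```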
Owner:HUNAN UNIV OF SCI & TECH

Machine-vision-based intelligent cutter lifting system of shearing machine and realization method thereof

The invention discloses a machine-vision-based intelligent cutter-lifting system for a shearing machine and a realization method thereof. The system comprises a touch screen, a PLC controller and a machine-vision cloth-seam detection system mounted on the frame of the shearing machine, facing directly ahead and obliquely forward of the fabric being processed. The cloth-seam detection system comprises a vision crossbeam, an imaging system and a control system; the imaging system comprises a light-source device, a camera and a lens. The light source is mounted on the crossbeam, which sits on the machine frame above the front guide roller, directly facing the front end of the fabric under inspection. As a non-contact detection system it does not affect the fabric's processing style; the machine vision system reliably detects cloth seams and holes, and the control system automatically lifts the brushing cutter or shearing cutter to let the seam pass. The system is simple in structure, highly accurate in control and low in fault rate; it saves labor, increases efficiency and improves product quality.
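A cloth seam or hole shows up as a band of rows whose brightness deviates from the fabric baseline. The sketch below is a crude statistical stand-in for the detector, not the patent's actual image processing; the image and threshold are illustrative:

```python
def detect_seam_rows(image, k=3.0):
    """Flag rows whose mean brightness deviates strongly from the fabric
    baseline (a simple stand-in for the cloth-seam/hole detector)."""
    means = [sum(row) / len(row) for row in image]
    mu = sum(means) / len(means)
    sigma = (sum((m - mu) ** 2 for m in means) / len(means)) ** 0.5 or 1.0
    return [i for i, m in enumerate(means) if abs(m - mu) > k * sigma]

# Synthetic frame: uniform fabric with one darker seam row at index 10.
image = ([[120] * 50 for _ in range(10)]
         + [[30] * 50]
         + [[120] * 50 for _ in range(10)])
seams = detect_seam_rows(image)
```

When a flagged row reaches the cutter line, the PLC would raise the brushing or shearing cutter until the seam has passed.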
Owner:HAINING TEXTILE MACHINERY FACTORY