
Robot feeding system based on three-dimensional stereoscopic vision and point cloud deep learning

A three-dimensional deep learning technology, applied in the field of robot feeding, that solves problems such as unpredictable grasp positions and methods that cannot be directly applied to industrial production, addressing the long cycle times and low efficiency of existing approaches

Pending Publication Date: 2020-04-07
ZHEJIANG UNIV +1

AI Technical Summary

Problems solved by technology

This method can achieve a high grasping success rate, but the workpieces and grasp positions are random and unpredictable on each attempt, so it cannot be directly applied to industrial production.




Embodiment Construction

[0021] In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and do not limit its scope of protection.

[0022] Figure 1 is a structural schematic diagram of a robot feeding system based on three-dimensional stereo vision and point cloud deep learning provided by an embodiment of the present invention. The robot feeding system at least includes a manipulator 3, a workpiece 2 within the working space of the manipulator 3, a visual sensor 1 arranged above the workpiece, and a processing device 4. The visual sensor 1 collects three-dimensional point cloud data of objects in its field of view in real time and transmits the point cloud data to the processing device...
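The data flow among the three components described above (sensor → processing device → manipulator) can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: all class and method names are hypothetical, the "sensor" returns a simulated scan, and the pose estimator is a trivial placeholder for the point-cloud deep network the patent describes.

```python
import numpy as np

class VisionSensor:
    """Stand-in for the overhead 3D sensor (component 1): returns an N x 3 point cloud."""
    def capture(self) -> np.ndarray:
        # Simulated scan: points scattered around a workpiece centre. A real
        # system would read from a stereo or structured-light camera driver.
        rng = np.random.default_rng(0)
        return rng.normal(loc=[0.4, 0.0, 0.1], scale=0.02, size=(1024, 3))

class ProcessingDevice:
    """Stand-in for the processing device (component 4): maps a point cloud to a grasp pose."""
    def estimate_pose(self, cloud: np.ndarray) -> dict:
        # Placeholder for the learned point-cloud model: here we simply use
        # the cloud centroid as position and a zero roll/pitch/yaw orientation.
        return {"position": cloud.mean(axis=0), "orientation_rpy": np.zeros(3)}

class Manipulator:
    """Stand-in for the manipulator (component 3): executes the received pose."""
    def grasp(self, pose: dict) -> bool:
        # Trivially "succeeds" whenever the received pose is finite and well-formed.
        return bool(np.isfinite(pose["position"]).all())

# One feeding cycle: capture -> estimate -> grasp
cloud = VisionSensor().capture()
pose = ProcessingDevice().estimate_pose(cloud)
ok = Manipulator().grasp(pose)
```

The point is the interface boundaries: the sensor produces only raw geometry, all interpretation happens in the processing device, and the manipulator consumes only a pose.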



Abstract

The invention discloses a robot feeding system based on three-dimensional stereoscopic vision and point cloud deep learning. The robot feeding system at least comprises a manipulator, a workpiece, a visual sensor arranged above the workpiece, and a processing device. The visual sensor collects three-dimensional point cloud data of objects in its field of view in real time and transmits the three-dimensional point cloud data to the processing device; the processing device calculates pose information of the workpiece from the received three-dimensional point cloud data and transmits the pose information to the manipulator; and the manipulator grasps the workpiece for feeding according to the received pose information. Unlike traditional pose estimation methods that depend on recognizing picture pixels or point cloud feature descriptors, this system takes as input the three-dimensional point cloud of the workpiece collected by the visual sensor and outputs the estimated manipulator grasp pose directly, making it an end-to-end robot feeding method based on deep learning.
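The abstract's key claim is the end-to-end mapping: raw point cloud in, grasp pose out. The sketch below illustrates that input/output contract using classical principal component analysis as a stand-in pose estimator; the patent's actual system uses a learned deep point-cloud model, not PCA, so this is only a shape-compatible placeholder.

```python
import numpy as np

def estimate_grasp_pose(cloud: np.ndarray):
    """Toy pose estimator: centroid as grasp position, principal axes as orientation.

    A PCA stand-in for the learned end-to-end network described in the patent.
    Input: (N, 3) point cloud. Output: 3-vector position, 3x3 axis matrix.
    """
    centroid = cloud.mean(axis=0)
    centered = cloud - centroid
    # Right singular vectors of the centred cloud are the principal axes,
    # ordered by the spread of the points along each axis.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axes = vt.T  # columns are principal axes (a rotation-like 3x3 matrix)
    return centroid, axes

# Example: an elongated synthetic "workpiece" lying along the x-axis
rng = np.random.default_rng(1)
cloud = rng.normal(size=(500, 3)) * np.array([0.10, 0.01, 0.01])
position, axes = estimate_grasp_pose(cloud)
```

For the synthetic cloud above, the first principal axis recovers the workpiece's long (x) direction, which is the kind of geometric cue a grasp planner needs; the patent replaces this hand-crafted step with a network trained to output the pose directly.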

Description

Technical Field

[0001] The invention relates to a robot feeding method in the field of artificial intelligence, in particular to a robot feeding system based on three-dimensional stereo vision and point cloud deep learning.

Background

[0002] At present, in the processing and production operations of manufacturing enterprises, two methods of robot loading and unloading are common: [0003] The first is a grasping solution based on custom-made workpiece pallets. The workpieces are stacked on the pallet in strict order, and the pallet is then placed in the work area of the industrial robot; after manual teaching or offline programming, the robot is guided to the designated position for grasping. Its advantages are relatively low equipment cost and a small installation footprint. But its shortcomings are also obvious: 1) the time and economic cost of customizing the workpiece pallet is quite high; 2) the workpiece needs to ...


Application Information

IPC(8): G06T7/70; G06T7/10; G06N3/08
CPC: G06T7/70; G06T7/10; G06N3/08; G06T2207/10012; G06T2207/30108
Inventors: 傅建中, 王郑拓, 徐月同, 俞炯炎, 顾天翼, 褚建农
Owner ZHEJIANG UNIV