
Improved physical object handling based on deep learning

A deep learning and physical object technology, applied in image enhancement, image analysis, program-controlled manipulators, etc., which can solve problems such as prior art not disclosing the details of training and using convolutional neural networks and not disclosing the handling of 3D objects.

Pending Publication Date: 2022-09-22
ROBOVISION
Cites: 0 | Cited by: 0

AI Technical Summary

Benefits of technology

The invention is a method for controlling a robot using a 3D network trained through deep learning. The main advantage of this method is that it provides accurate and robust control of the robot. Additionally, the trained network is a semantic segmentation NN, which uses a 3D PointNet++ that combines efficiency and robustness by considering neighboring points at multiple scales. This helps to improve the overall performance of the robot control system.
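
As a rough sketch of the "neighboring points at multiple scales" idea behind a PointNet++-style multi-scale grouping step (illustrative only, not the applicant's implementation; the function name, radii, and feature sizes are assumptions):

```python
import numpy as np

def group_multiscale(points, features, centers, radii, max_neighbors=32):
    """For each center point, pool neighbor features at several radii (multi-scale grouping).

    points:   (N, 3) xyz coordinates of the point cloud
    features: (N, C) per-point features
    centers:  (M, 3) query points (e.g. farthest-point samples)
    radii:    list of grouping radii, one per scale
    Returns (M, C * len(radii)) pooled features, one concatenated vector per center.
    """
    pooled_scales = []
    for r in radii:
        pooled = np.zeros((len(centers), features.shape[1]))
        for i, c in enumerate(centers):
            dist = np.linalg.norm(points - c, axis=1)
            idx = np.where(dist < r)[0][:max_neighbors]
            if idx.size:                       # max-pool the neighborhood's features
                pooled[i] = features[idx].max(axis=0)
        pooled_scales.append(pooled)
    return np.concatenate(pooled_scales, axis=1)

# toy usage: 1000 points with 8-dim features, 64 centers, two scales -> (64, 16)
pts = np.random.rand(1000, 3)
feat = np.random.rand(1000, 8)
ctr = pts[np.random.choice(1000, 64, replace=False)]
print(group_multiscale(pts, feat, ctr, radii=[0.05, 0.1]).shape)
```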

Problems solved by technology

In particular, a problem with known methods may be to take into account the structure of the object, including the 3D surface, for which the handling may depend critically on the handling coordinate and the orientation of the 3D object.
However, US20190087976A1 does not disclose details of training and using the convolutional neural networks.
However, EP3480730A1 does not disclose handling of 3D objects.
However, also WO2019002631A1 does not disclose handling of 3D objects.



Examples


Example 1

Embodiments According to the Invention

[0143]FIG. 1 illustrates example embodiments of a method according to the invention. It relates to a method for generating a robot command (2) for handling a three-dimensional, 3D, physical object (1) present within a reference volume and comprising a 3D surface. It comprises the step of, based on a PLC trigger, obtaining (11) at least two images (30) of said physical object (1) from a plurality of cameras (3) positioned at different respective angles with respect to said object (1).
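
A minimal sketch of this acquisition step, assuming standard OpenCV capture devices; the PLC integration and camera indices are hypothetical placeholders:

```python
import cv2

def capture_views(camera_indices=(0, 1)):
    """Grab one frame per camera once the PLC trigger fires (hypothetical helper)."""
    images = []
    for idx in camera_indices:
        cap = cv2.VideoCapture(idx)            # one capture device per camera angle
        ok, frame = cap.read()
        cap.release()
        if not ok:
            raise RuntimeError(f"camera {idx} returned no frame")
        images.append(frame)
    return images
```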

[0144]Each of the images is subjected to a threshold (12), which may preferably be an application-specific pre-determined threshold, to convert it into black and white; the result may be fed as a black and white foreground mask to the next step, either replacing the original image or in addition to it.
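
A minimal sketch of such a thresholding step with OpenCV; the threshold value stands in for the application-specific pre-determined constant mentioned above:

```python
import cv2
import numpy as np

def to_foreground_mask(image_bgr, threshold=128):
    """Convert one camera image to a black-and-white foreground mask."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    return mask

# toy usage on a synthetic image; the mask may be passed on alongside or instead of the original
image = np.full((480, 640, 3), 200, dtype=np.uint8)
mask = to_foreground_mask(image)
```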

[0145]The next step comprises generating (15), with respect to the 3D surface of said object (1), a voxel representation segmented based on said at least tw...
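
The segmentation itself is produced by the trained NN, but purely as an illustration of how a voxel representation within the reference volume can be derived from multiple per-view masks, a space-carving step might look as follows (the calibration matrices, voxel grid, and function name are hypothetical):

```python
import numpy as np

def carve_voxels(masks, projections, grid):
    """Keep voxels whose projection lands in the foreground of every view.

    masks:       list of (H, W) binary foreground masks, one per camera
    projections: list of 3x4 camera projection matrices (hypothetical calibration)
    grid:        (V, 3) voxel centre coordinates in the reference volume
    Returns a boolean occupancy vector of length V.
    """
    occupied = np.ones(len(grid), dtype=bool)
    homog = np.hstack([grid, np.ones((len(grid), 1))])           # (V, 4) homogeneous coords
    for mask, P in zip(masks, projections):
        uvw = homog @ P.T                                        # project into the image plane
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        inside = (0 <= u) & (u < mask.shape[1]) & (0 <= v) & (v < mask.shape[0])
        fg = np.zeros(len(grid), dtype=bool)
        fg[inside] = mask[v[inside], u[inside]] > 0
        occupied &= fg                                           # carve away background voxels
    return occupied
```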

Example 2

Embodiments with 2D CNN According to the Invention

[0151]FIGS. 2-8 illustrate steps of example embodiments of a method according to the invention, wherein the NN is a CNN, particularly a 2D U-net.
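
For orientation, a heavily stripped-down 2D U-net-style network could look like the sketch below (PyTorch; the layer sizes and depth are assumptions, not the network described in the figures):

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """A minimal 2D U-net-style segmentation network (illustrative only)."""

    def __init__(self, in_ch=3, n_classes=2):
        super().__init__()
        def block(ci, co):
            return nn.Sequential(nn.Conv2d(ci, co, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(co, co, 3, padding=1), nn.ReLU())
        self.down1 = block(in_ch, 16)
        self.down2 = block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)                 # 16 skip channels + 16 upsampled channels
        self.head = nn.Conv2d(16, n_classes, 1)  # per-pixel class scores

    def forward(self, x):
        d1 = self.down1(x)                       # encoder level 1
        d2 = self.down2(self.pool(d1))           # encoder level 2
        u = self.up(d2)                          # decoder: upsample
        u = self.dec(torch.cat([u, d1], dim=1))  # skip connection
        return self.head(u)

print(TinyUNet()(torch.rand(1, 3, 64, 64)).shape)  # torch.Size([1, 2, 64, 64])
```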

[0152]FIG. 2 provides an overview of example embodiments of a method according to the invention. In this example, the object (2) is a rose present in the reference volume that is cut into cuttings such that the cuttings may be picked up and planted in a next process step. To this end, the robot element is a robot cutter head (4) that approaches the object and cuts the stem, according to the robot command, at appropriate positions such that cuttings with at least one leaf are created. Particularly, the robot command relates to a robot pose that comprises a starting and/or ending position, i.e. a set of three coordinates, e.g., x, y, z, within the reference volume, and, if one of the starting and ending positions is not included, an approaching angle, i.e. a set of three angles, e.g., alpha, bet...
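
Purely as an illustration, the pose described here could be held in a container like the following; the field names, and in particular the third angle (gamma), are assumptions since the original text is truncated:

```python
from dataclasses import dataclass

@dataclass
class RobotCommand:
    """Hypothetical container for a robot pose: a position (x, y, z) in the
    reference volume plus an approaching angle (alpha, beta, gamma)."""
    x: float
    y: float
    z: float
    alpha: float
    beta: float
    gamma: float

# e.g. a cut position for the robot cutter head with a given approach orientation
cmd = RobotCommand(x=0.12, y=0.40, z=0.85, alpha=0.0, beta=1.57, gamma=0.0)
```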

Example 3

GUI with 2D Annotation According to the Invention

[0160]FIG. 9 illustrates example embodiments of a GUI with 2D annotation. The GUI (90) may be used for training of any NN, preferably a 2D U-net or a 3D PointNet++ or a 3D DGCNN, such as the CNNs of Example 2. The GUI operates on a training set relating to a plurality of training objects (9), in this example a training set with images of several hundred roses, with eight images for each rose taken by eight cameras from eight different angles. Each of the training objects (9) comprises a 3D surface similar to the 3D surface of the object for which the NN is trained, i.e. another rose.

[0161]However, it should be noted that the NN, when trained for a rose, may also be used for plants with a structure similar to that of a rose, even if the training set did not comprise any training objects other than roses.

[0162]The GUI comprises at least one image view (112) and allows receiving manual annotations (91, 92, 93) with respect to a plurality ...
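
A minimal sketch of how the annotated training data produced by such a GUI could be organised, assuming per-view label masks; all names below are hypothetical:

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class AnnotatedTrainingObject:
    """Hypothetical record for one training object (e.g. one rose): the eight
    camera views plus the manual annotations drawn in the GUI."""
    views: List[np.ndarray]        # eight images, one per camera angle
    annotations: List[np.ndarray]  # per-view label masks, same size as the views

def to_training_pairs(objects):
    """Flatten annotated objects into (image, mask) pairs for training a 2D network."""
    return [(img, mask)
            for obj in objects
            for img, mask in zip(obj.views, obj.annotations)]
```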



Abstract

The present invention relates to a method for generating a robot command for handling a three-dimensional, 3D, physical object present within a reference volume and comprising a 3D surface, comprising: obtaining at least two images of said physical object from a plurality of cameras positioned at different respective angles with respect to said object; generating, with respect to the 3D surface of said object, a voxel representation segmented based on said at least two images; and computing the robot command for said handling of said object based on said segmented voxel representation.

Description

FIELD OF THE INVENTION

[0001]The present invention relates to handling of 3D physical objects by means of robots based on deep learning.

BACKGROUND ART

[0002]In image analysis of 3D objects in the context of robot automation, visualization and 3D image reconstruction are fundamental for enabling accurate handling of physical objects. Image data may be a mere set of 2D images, requiring extensive processing in order to generate appropriate robot commands that take into account the features of the object as well as the requirements of the application.

[0003]In particular, a problem with known methods may be to take into account the structure of the object, including the 3D surface, for which the handling may depend critically on the handling coordinate and the orientation of the 3D object.

[0004]US20190087976A1 discloses an information processing device that includes a camera and a processing circuit. The camera takes first distance images of an object for a plurality of angles. The processing c...

Claims


Application Information

Patent Type & Authority: Applications (United States)
IPC(8): B25J9/16
CPC: B25J9/163; B25J9/1697; B25J9/161; B25J9/1666; B25J9/1653; G06T2207/20084; G06T7/55; G06T2207/20081; G06T2207/20104; G06T7/11; G06V10/7747; G06V10/82; G06T2207/10028
Inventor: VAN PARYS, RUBEN; WAGNER, ANDREW; VERSTRAETE, MATTHIAS; WAEGEMAN, TIM
Owner: ROBOVISION