
Item active pick-up method through mechanical arm based on deep and reinforced learning

A technology of reinforcement learning and manipulators, applied to manipulators, program-controlled manipulators, manufacturing tools, etc., which addresses problems such as pickup failures, networks that readily output erroneous values, and the inability of existing methods to solve these issues effectively.

Active Publication Date: 2019-11-15
TSINGHUA UNIV
Cites: 7 · Cited by: 27

AI Technical Summary

Problems solved by technology

[0003] Researchers from MIT and Princeton published a paper entitled Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching, which enables robots to pick and place unfamiliar objects in stacked scenes through multi-affordance grasping and cross-domain image matching. Trained with deep learning, the work proposes a Suction Affordances network (a suction-position confidence network) that takes a color-depth image of the picking scene as input and outputs a pixel-level Suction Affordances Map (a suction-location confidence map), thereby avoiding complex item segmentation and recognition and directly yielding candidate pickup locations. However, in complex scenes the network is prone to outputting erroneous values, which causes pickup failures, and existing methods cannot effectively solve this problem.
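To illustrate the pixel-wise affordance idea described above, the following is a minimal PyTorch sketch: a small fully convolutional network maps an RGB-D image to a per-pixel suction confidence map, and the highest-confidence pixel is taken as the candidate pickup location. The toy network, its layer sizes, and the random input tensor are illustrative assumptions, not the architecture from the cited paper.

```python
import torch
import torch.nn as nn

class SuctionAffordanceNet(nn.Module):
    """Toy fully convolutional network: RGB-D image -> per-pixel suction confidence."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 1, kernel_size=1)  # one confidence value per pixel

    def forward(self, rgbd):                          # rgbd: (B, 4, H, W)
        return torch.sigmoid(self.head(self.features(rgbd)))  # (B, 1, H, W) in [0, 1]

net = SuctionAffordanceNet()
rgbd = torch.rand(1, 4, 224, 224)                     # stand-in for a real color-depth frame
affordance = net(rgbd)[0, 0]                          # (H, W) suction affordance map
flat_index = torch.argmax(affordance).item()
v, u = divmod(flat_index, affordance.shape[1])        # row (v) and column (u) of best pixel
print(f"candidate suction pixel: ({u}, {v}), confidence {affordance[v, u].item():.3f}")
```

When the predicted confidences are wrong, the arg-max pixel points at an unsuitable location and the suction attempt fails, which is exactly the failure mode the invention targets.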




Embodiment Construction

[0071] The present invention proposes a method for actively picking up objects with a robotic arm based on deep reinforcement learning. The present invention will be further described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0072] The present invention proposes a method for actively picking up objects with a robotic arm based on deep reinforcement learning. The overall process is shown in Figure 1 and specifically includes the following steps:

[0073] 1) Build the simulation environment for manipulator pickup; this embodiment adopts V-REP software (Virtual Robot Experimentation Platform). The specific steps are as follows:

[0074] 1-1) Import into V-REP any manipulator model whose motion can be controlled (the model may differ from the actual manipulator) to serve as the simulated manipulator. This embodiment uses the UR5 (Universal Robots 5)...
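As an illustration of step 1-1), here is a minimal sketch of driving an imported UR5 model from Python through V-REP's legacy remote API (the `sim` bindings shipped with V-REP). The connection port, the joint names `UR5_joint1` ... `UR5_joint6`, and the target angles are assumptions about how the scene might be set up, not values prescribed by the invention.

```python
import math
import sim  # legacy V-REP remote API Python bindings (remoteApi library must be on the path)

# Connect to a V-REP instance whose continuous remote API server is enabled (port 19997).
sim.simxFinish(-1)                                 # close any stale connections
client = sim.simxStart('127.0.0.1', 19997, True, True, 5000, 5)
if client == -1:
    raise RuntimeError('could not connect to the V-REP remote API server')

# Resolve handles for the six UR5 joints (names assume the stock UR5 model in the scene).
joints = []
for i in range(1, 7):
    ret, handle = sim.simxGetObjectHandle(client, f'UR5_joint{i}', sim.simx_opmode_blocking)
    if ret != sim.simx_return_ok:
        raise RuntimeError(f'joint UR5_joint{i} not found in the scene')
    joints.append(handle)

sim.simxStartSimulation(client, sim.simx_opmode_blocking)

# Command an arbitrary joint configuration to check that the model's motion is controllable.
target = [0.0, -math.pi / 4, math.pi / 2, 0.0, math.pi / 2, 0.0]
for handle, angle in zip(joints, target):
    sim.simxSetJointTargetPosition(client, handle, angle, sim.simx_opmode_oneshot)

sim.simxStopSimulation(client, sim.simx_opmode_blocking)
sim.simxFinish(client)
```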



Abstract

The invention provides a method for the active pickup of items by a mechanical arm based on deep reinforcement learning, and belongs to the field of artificial intelligence and robotics applications. The method comprises the following steps: first, establishing a simulation environment containing the mechanical arm and an item-pickup scene; establishing a deep Q-learning network NQ based on multiple parallel U-Nets; carrying out multiple robot grasping action-policy tests in the simulation environment to train NQ, thereby obtaining a trained deep learning network; and establishing an item-pickup system for actual pickup use that employs the trained network, where color-depth images are input and a metric defined on the confidence map determines whether the mechanical arm adopts the action policy of actively changing the scene or of picking up an item. By actively changing the item-pickup environment through the mechanical arm and adapting to different pickup conditions, the method achieves a high pickup success rate.
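To make the abstract concrete, the following is a minimal sketch, under stated assumptions, of the kind of architecture and decision rule described: several small U-Nets run in parallel on the color-depth input, each producing a pixel-wise confidence (Q-value) map for one action primitive, and a simple metric on the pickup map (its maximum confidence compared with a threshold) selects between actively changing the scene and attempting a pickup. The branch structure, the two primitives, the layer sizes and the threshold are illustrative assumptions, not the patented network NQ.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Small U-Net branch: RGB-D input -> one pixel-wise confidence/Q map."""
    def __init__(self, in_ch=4):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.MaxPool2d(2),
                                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 1))

    def forward(self, x):
        e1 = self.enc1(x)                  # (B, 16, H, W)
        e2 = self.enc2(e1)                 # (B, 32, H/2, W/2)
        d = self.up(e2)                    # (B, 16, H, W)
        d = torch.cat([d, e1], dim=1)      # skip connection
        return self.dec(d)                 # (B, 1, H, W)

class ParallelUNetQ(nn.Module):
    """Deep Q network built from parallel U-Net branches, one per action primitive."""
    def __init__(self, primitives=('change_scene', 'pick_up')):
        super().__init__()
        self.primitives = primitives
        self.branches = nn.ModuleList(TinyUNet() for _ in primitives)

    def forward(self, rgbd):
        # Returns {primitive name: (B, 1, H, W) pixel-wise Q map}.
        return {name: branch(rgbd)
                for name, branch in zip(self.primitives, self.branches)}

def choose_action(q_maps, pick_threshold=0.5):
    """Metric on the pickup confidence map: attempt a pickup only if its best
    pixel exceeds the threshold; otherwise actively change the scene."""
    pick_map = torch.sigmoid(q_maps['pick_up'][0, 0])
    best = pick_map.max().item()
    if best >= pick_threshold:
        idx = torch.argmax(pick_map).item()
        v, u = divmod(idx, pick_map.shape[1])
        return 'pick_up', (u, v), best
    change_map = torch.sigmoid(q_maps['change_scene'][0, 0])
    idx = torch.argmax(change_map).item()
    v, u = divmod(idx, change_map.shape[1])
    return 'change_scene', (u, v), change_map[v, u].item()

net = ParallelUNetQ()
rgbd = torch.rand(1, 4, 128, 128)   # stand-in for a real color-depth observation
action, pixel, confidence = choose_action(net(rgbd))
print(action, pixel, round(confidence, 3))
```

Each pixel of a branch's output can be read as the Q-value of executing that primitive at that image location; the fixed threshold here merely stands in for whatever measurement on the confidence map the method actually defines.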

Description

Technical field

[0001] The invention belongs to the technical field of robot applications, and in particular relates to a method for the active pickup of items by a mechanical arm based on deep reinforcement learning.

Background technique

[0002] Robust and efficient item picking is one of the main research topics of robotics. With the rapid development of e-commerce, it is widely applied in warehouse management, unmanned stores and industrial production lines. Most current item-picking methods are passive: cameras capture static images of the current item-stacking scene for item segmentation, classification and pose estimation. In practical applications, however, item-picking scenes are complex, and it is difficult to perform segmentation, classification and pose estimation accurately and efficiently; phenomena unfavorable to picking often occur, such as items occluding one another and item poses approaching the limit...


Application Information

Patent Type & Authority: Application (China)
IPC(8): B25J9/16
CPC: B25J9/163
Inventors: 刘华平, 方斌, 韦毅轩, 邓宇鸿, 陆恺, 郭晓峰, 郭迪, 孙富春
Owner: TSINGHUA UNIV