Flexible robot vision recognition and positioning system based on deep learning

A robot vision and deep learning technology, applied in the field of flexible robot visual recognition and positioning systems, which addresses the poor validity and adaptability of hand-crafted features and the resulting low recognition and positioning accuracy of existing flexible robots.

Active Publication Date: 2017-05-24
CHONGQING UNIV OF TECH

AI Technical Summary

Problems solved by technology

However, when faced with mixed parts, random placement, viewing-angle changes, lighting changes, background interference, and even image scaling or distortion, these methods cannot reliably recognize part targets.
Moreover, the hand-crafted features described above have serious problems of validity and adaptability: designing them requires heuristic methods and highly specialized knowledge, relies largely on personal experience, may be limited to specific conditions, and often adapts poorly to new situations.
This poor validity and adaptability of hand-crafted part features is one of the main reasons for the low recognition and positioning accuracy of existing flexible robots.

Method used




Embodiment Construction

[0058] The present invention is described in further detail below with reference to the accompanying drawings.

[0059] Specific implementation: Figure 1 shows the flow of the deep-learning-based flexible robot visual recognition and positioning system in this embodiment. Taking the recognition and localization of a certain type of part on a conveyor as an example, the implementation proceeds as follows:

[0060] 1) Acquire the part image with a CCD camera, binarize the original image using a single-threshold method, and extract the outer contour edge of the part image with the Roberts operator;
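The binarization and Roberts edge-extraction step can be sketched in plain NumPy. This is a minimal illustration, not the patent's implementation: the threshold value, the toy image, and the gradient-magnitude formulation are assumptions.

```python
import numpy as np

def binarize(img, thresh=128):
    """Single-threshold binarization: pixels above thresh -> 255, else 0."""
    return np.where(img > thresh, 255, 0).astype(np.uint8)

def roberts_edges(img):
    """Roberts cross operator: gradient magnitude from the two 2x2
    diagonal kernels [[1,0],[0,-1]] and [[0,1],[-1,0]]."""
    f = img.astype(np.float64)
    gx = f[:-1, :-1] - f[1:, 1:]   # diagonal difference
    gy = f[:-1, 1:] - f[1:, :-1]   # anti-diagonal difference
    return np.sqrt(gx**2 + gy**2)

# toy 4x4 "part image": a bright square on a dark background
img = np.zeros((4, 4), dtype=np.uint8)
img[1:3, 1:3] = 200

bw = binarize(img)        # binary image
edges = roberts_edges(bw) # strong response only on the square's border
```

The Roberts operator responds only where the binary image changes value, so the interior of the part stays zero and the outer contour stands out.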

[0061] The system uses a Daheng DH-SV2001FM CCD camera, a Computar M1614-MP2 industrial lens, an EpVision EP-1394A dual-channel 1394A image acquisition card, and backlighting. Figure 2 shows the image of parts on the conveyor captured by the CCD camera. The par...



Abstract

The present invention discloses a flexible robot vision recognition and positioning system based on deep learning. The system operates in the following steps: obtaining an image of a part and binarizing it to extract the outer contour edge of the part image; finding the axis-aligned circumscribed rectangle of the outer contour edge, determining the region to be recognized, and normalizing that region to a standard image; rotating the standard image step by step at equal angles and finding the rotation angle alpha at which the area of the axis-aligned circumscribed rectangle of the outer contour edge is minimal; using a deep learning network to extract the outer contour edge at rotation angle alpha and to recognize the part and its pose; and, from the rotation angle alpha and the recognized pose, calculating the actual pose of the part before rotation and transmitting the pose data to a flexible robot so that it can pick up the part. Because the deep learning network automatically extracts, layer by layer, the contour shape features contained in the part image data, the accuracy and adaptability of part recognition and positioning are greatly improved under complicated conditions.
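The equal-angle rotation search in the abstract (finding the angle alpha at which the axis-aligned circumscribed rectangle of the contour has minimal area) can be sketched on contour points as follows. The contour coordinates, the 1-degree step, and the 0–90 degree search range are illustrative assumptions, not values from the patent.

```python
import numpy as np

def rotate_points(pts, angle_deg):
    """Rotate 2-D points about the origin by angle_deg (counterclockwise)."""
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return pts @ R.T

def bbox_area(pts):
    """Area of the axis-aligned bounding rectangle of a point set."""
    w = pts[:, 0].max() - pts[:, 0].min()
    h = pts[:, 1].max() - pts[:, 1].min()
    return w * h

def min_bbox_angle(contour, step=1.0):
    """Rotate the contour in equal angular steps over [0, 90) and return
    the angle alpha that minimises the bounding-rectangle area."""
    angles = np.arange(0.0, 90.0, step)
    areas = [bbox_area(rotate_points(contour, a)) for a in angles]
    return angles[int(np.argmin(areas))]

# toy contour: corners of a 4x2 rectangle tilted by 30 degrees
rect = np.array([[0, 0], [4, 0], [4, 2], [0, 2]], dtype=float)
tilted = rotate_points(rect, 30.0)
alpha = min_bbox_angle(tilted)  # rotating by 60 more degrees re-aligns it
```

For a rectangle tilted by 30 degrees, the search returns alpha = 60, since a further 60-degree rotation brings the contour back into axis alignment, where its bounding rectangle is tightest.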

Description

technical field [0001] The invention relates to the technical field of visual recognition and positioning in the robot industry, and in particular to a flexible robot visual recognition and positioning system based on deep learning. Background technique [0002] In the automatic assembly, handling, and sorting of parts by flexible industrial robots, accurately identifying and locating randomly placed, mixed mechanical parts on a conveyor is a very important and very difficult task. Whether the features of the parts are correctly selected and effectively described has a significant, even decisive, impact on the flexible robot's final recognition and positioning. Under complex conditions such as mixed parts, random placement, viewing-angle changes, image scaling, distortion, lighting changes, and background interference, the recognition and positioning accuracy of existing flexible robots is not high. The main reason is that the featu...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/00; G06T7/73; G06T7/66
CPC: G06T7/0004; G06T2207/10004; G06T2207/20081; G06T2207/20084; G06T2207/30164
Inventor: 余永维; 杜柳青
Owner: CHONGQING UNIV OF TECH