Assembly robot part deep learning recognition method

A deep learning recognition method applied in the field of part recognition. It addresses the problems that existing approaches lack robustness and are prone to missed and false detections, and achieves high recognition accuracy and a good detection effect.

Active Publication Date: 2019-08-16
CHONGQING UNIV OF TECH

AI Technical Summary

Problems solved by technology

Although YOLOv3 improves the detection of small targets relative to YOLOv2, it still lacks robustness when facing small workpieces with inconspicuous ...




Detailed Description of the Embodiments

[0034] The present invention is described in further detail below with reference to an embodiment.

[0035] This embodiment is based on the YOLOv3 algorithm and optimizes and improves the feature extraction network structure for part detection in a machine vision system, so that parts are detected well. The whole recognition process is shown in Figure 1.
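As a rough illustration of this process, the sketch below assumes an OpenCV-compatible industrial camera and an already-loaded YOLOv3-style detector; the helper names and the format of the detector's output are hypothetical and not specified by the patent.

```python
# Minimal sketch of the recognition loop in Figure 1 (hypothetical helper names;
# the patent does not prescribe a specific implementation).
import cv2  # assumes an OpenCV-compatible industrial camera


def recognize_parts(model, device_index=0, conf_thresh=0.5):
    """Grab one frame, run the YOLOv3-style detector, and return
    (part category, bounding box) pairs."""
    cap = cv2.VideoCapture(device_index)   # industrial camera (index assumed)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not read a frame from the camera")

    # `model` is assumed to return a list of detections, each a dict with
    # "class_name", "bbox" (x, y, w, h) and "confidence" keys.
    detections = model(frame)
    return [(d["class_name"], d["bbox"])
            for d in detections if d["confidence"] >= conf_thresh]
```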

[0036] 1. The principle of real-time part recognition based on YOLOv3

[0037] YOLOv3 designs a new backbone classification network, Darknet-53, on the basis of ResNet and the Darknet-19 structure used in YOLOv2; it contains 53 convolutional layers in total. Only small 1×1 and 3×3 convolution kernels are used in the network, generating more filters while reducing the number of parameters, so as to obtain a more discriminative mapping function and reduce the possibility of overfitting; convolution kernels with a stride of 2 replace pooling layers for downsampling, so as to maintain...
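The two building blocks described above can be sketched briefly in PyTorch: the 1×1/3×3 residual unit and the stride-2 downsampling convolution. PyTorch itself and the exact layer ordering are assumptions, not taken from the patent.

```python
# Sketch of the Darknet-53 building blocks described above (PyTorch is an
# assumption; the patent does not name a framework).
import torch.nn as nn


def conv_bn_leaky(c_in, c_out, k, stride=1):
    """Convolution + batch norm + LeakyReLU, the basic unit of Darknet-53."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, stride=stride, padding=k // 2, bias=False),
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.1, inplace=True),
    )


class DarknetResidual(nn.Module):
    """1x1 bottleneck followed by a 3x3 convolution, added back to the input."""
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            conv_bn_leaky(channels, channels // 2, 1),  # small 1x1 kernel
            conv_bn_leaky(channels // 2, channels, 3),  # small 3x3 kernel
        )

    def forward(self, x):
        return x + self.block(x)


# Downsampling is done with a stride-2 3x3 convolution instead of a pooling layer.
downsample_64_to_128 = conv_bn_leaky(64, 128, 3, stride=2)
```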



Abstract

The invention discloses an assembly robot part deep learning recognition method which comprises the following steps: first, an image of the workpiece to be recognized is obtained with an industrial camera; the image is then recognized with a YOLOv3 network, which outputs the part category and position information. The YOLOv3 network comprises five residual network blocks; a CFENet module is introduced after each residual network block and integrated into the Darknet-53 feature extraction network for image feature extraction. The method can recognize workpieces in normal poses, achieves a good detection effect on parts under complex conditions such as camera overexposure and mutual occlusion of workpieces, and has high recognition accuracy.
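As a rough sketch of the structure the abstract describes, the code below appends an enhancement module after each of the five residual stages of Darknet-53. The internals of `CFEModule` are an assumption (the abstract only names the module), and the stage channel widths are the standard Darknet-53 values; only the placement of the module is taken from the abstract.

```python
# Illustrative wiring of a CFE-style module after each of Darknet-53's five
# residual stages (module internals are assumed; only the placement is
# taken from the abstract).
import torch
import torch.nn as nn


class CFEModule(nn.Module):
    """Placeholder feature-enhancement block: two parallel dilated 3x3
    convolutions fused by a 1x1 convolution and added back to the input."""
    def __init__(self, channels):
        super().__init__()
        self.branch1 = nn.Conv2d(channels, channels, 3, padding=1, dilation=1)
        self.branch2 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        y = torch.cat([self.branch1(x), self.branch2(x)], dim=1)
        return x + self.fuse(y)


class Darknet53WithCFE(nn.Module):
    """Darknet-53 feature extractor with a CFE module after each residual stage."""
    def __init__(self, stages):
        # `stages`: the five residual stages of Darknet-53, with output widths
        # 64, 128, 256, 512 and 1024 channels respectively.
        super().__init__()
        self.stages = nn.ModuleList(stages)
        self.cfes = nn.ModuleList(CFEModule(c) for c in (64, 128, 256, 512, 1024))

    def forward(self, x):
        features = []
        for stage, cfe in zip(self.stages, self.cfes):
            x = cfe(stage(x))
            features.append(x)   # multi-scale feature maps for the YOLO heads
        return features
```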

Description

Technical Field

[0001] The invention relates to the technical field of part recognition, and in particular to a deep learning recognition method for parts handled by an assembly robot.

Background Technique

[0002] The identification and positioning of workpieces is an important part of machine vision. In recent years, with the wide application of machine vision in industrial automation, higher requirements have been placed on recognition accuracy and positioning accuracy. Traditional machine vision target detection methods are based on hand-designed feature extractors: feature classifiers are built on HARRIS corner detection, the SURF algorithm, histograms of oriented gradients, or edge pixel transitions to achieve workpiece detection. Hand-designed feature classifiers have low robustness and cannot adapt to situations where the target workpieces undergo large changes such as mutual stacking.

[0003] Deep convolutional neural networks can ...
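For contrast with the learned detector, the short OpenCV sketch below shows the kind of hand-designed feature extraction the background refers to (Harris corners and a histogram-of-oriented-gradients descriptor); the thresholds and window size are illustrative and not taken from the patent.

```python
# Hand-designed feature extraction of the kind contrasted in the background
# (OpenCV functions; parameter values are illustrative only).
import cv2
import numpy as np


def harris_corner_mask(gray, rel_thresh=0.01):
    """Harris corner response, returned as a boolean mask of strong corners."""
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    return response > rel_thresh * response.max()


def hog_feature_vector(gray):
    """HOG descriptor on a fixed-size patch, suitable as input to a classical
    feature classifier such as an SVM."""
    hog = cv2.HOGDescriptor()                 # default 64x128 detection window
    patch = cv2.resize(gray, (64, 128))
    return hog.compute(patch)
```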


Application Information

IPC (8): G06K 9/62; G06N 3/04; G06N 3/08
CPC: G06N 3/08; G06N 3/044; G06N 3/045; G06F 18/23213; G06F 18/253; Y02P 90/30
Inventors: 余永维, 彭西, 杜柳青
Owner: CHONGQING UNIV OF TECH