Automatic sorting and boxing equipment of component assembly line based on vision

A technology for the automatic sorting of parts, applied in sorting and related fields, which solves the problem that products on a factory assembly line cannot be automatically sorted and packaged according to model, achieving fast calculation speed, high calculation accuracy, and the ability to meet factory needs.

Pending Publication Date: 2018-02-09
Applicant: HEFEI INSTITUTES OF PHYSICAL SCIENCE - CHINESE ACAD OF SCI
Cites: 0 | Cited by: 19

AI-Extracted Technical Summary

Problems solved by technology

[0003] To overcome the defect of the prior art that, during factory assembly-line operation, products cannot be automatically sorted and packaged according to model, the present invention provides vision-based automatic sorting and boxing equipment for a parts assembly line, which uses an industrial camera and an LED light source to collect part image information, and image processing algorithms...

Abstract

The invention discloses vision-based automatic sorting and boxing equipment for a parts assembly line, belonging to the technical field of robot visual recognition and positioning. Use of the equipment is divided into offline teaching and online identification, positioning, and sorting. In offline teaching, a part is manually placed in the camera's field of view, the camera collects images from which features are extracted, and the robotic arm is manually controlled to pick up the part; the purpose is to enter the part's information into the system. In online identification, positioning, and sorting, part image information is collected in real time on the transmission line, features are extracted by an industrial computer and compared with the features extracted offline to identify the model of the current part, the relative translation and rotation are calculated, and the translation and rotation are sent to the robot controller to control the robotic arm to pick up and box the part. The equipment can identify and locate parts placed at any position and in any posture on the conveyor belt, and new parts are easy to add and calibrate, improving practicability and scalability.

Technology Topic

Robotics; time information; and 16 more


Examples

  • Experimental program (1)

Example Embodiment

[0019] The present invention will be further described below in conjunction with the drawings and embodiments.
[0020] This embodiment discloses vision-based automatic sorting and boxing equipment for a parts assembly line, based on a non-contact, vision-based image processing method. As shown in Figure 1, it includes parts 1 randomly placed on the conveyor belt 2, the conveyor belt 2, a proximity switch 3, an area-scan camera 4, a ring LED light source 5, an industrial computer 6 for image processing, a robot controller 7, a six-axis robotic arm 8, a suction cup 9, and material boxes 10 for boxing the different models. The area-scan camera 4 and the LED light source 5 are installed above the conveyor belt 2, and the proximity switch 3 is located within the camera's field of view.
[0021] As shown in Figure 2, the specific operation process is as follows. First, when a new model of part 1 is to be added, the part 1 is placed offline in the camera's field of view, and the LED light source 5 is lit to trigger the area-scan camera 4 to collect an image of the part 1. The industrial computer 6 processes the image to extract the edge contour coordinates and calculates the polar coordinate vector and the dimension-reduced feature vector, which are stored in the database for online recognition and positioning; the robot is then taught to pick up the part 1 and place it in the designated material box 10. After the new model of part 1 has been added offline, during online operation the proximity switch 3 installed on the conveyor belt 2 detects whether a part 1 passes. When the proximity switch 3 detects a part 1, it controls the LED light source 5 to turn on and trigger the area-scan camera 4 to collect an image. The same image processing method is used to obtain the contour polar coordinate vector and the dimension-reduced feature vector of the part currently on the conveyor belt 2, which are compared with the polar coordinate vectors and dimension-reduced feature vectors in the database to determine the model of the current part 1 and its translation and rotation relative to the offline teaching position. The translation and rotation are then sent to the robot controller 7, which controls the suction cup 9 on the end joint of the robotic arm 8 to pick up the part 1 and place it in the specified material box 10.
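
A minimal sketch of this online cycle in Python is shown below. All device objects (switch, light, camera, robot) and helper names are hypothetical placeholders rather than APIs disclosed in the patent; the two feature functions are sketched after paragraphs [0024] and [0025].

    # Hypothetical online loop: proximity switch 3 triggers LED light
    # source 5 and area-scan camera 4; features are matched against the
    # offline-taught database; the pose goes to robot controller 7.
    def online_cycle(db, switch, light, camera, robot):
        while True:
            if not switch.triggered():        # wait for a part 1 to pass
                continue
            light.on()                        # light up LED source 5
            image = camera.grab()             # area-scan camera 4 fires
            light.off()
            R = contour_polar_vector_from(image)   # polar coordinate vector
            f = reduce_dimension(R)                # dimension-reduced feature
            model, dx, dy, dtheta = match_database(db, R, f)
            robot.pick_and_place(model, dx, dy, dtheta)   # suction cup 9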
[0022] Figure 3 shows the comparison result between the polar coordinate vector of the current part's edge contour and the polar coordinate vector of the taught part.
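
The patent does not spell out how the two polar coordinate vectors are compared. One plausible sketch, assuming R is sampled on a uniform θ grid, uses circular cross-correlation: rotating the part about its center circularly shifts R, so the best shift estimates the rotation angle, and the residual after shifting indicates whether the models match. The function name and the FFT-based approach are illustrative assumptions, not the disclosed algorithm.

    import numpy as np

    def compare_polar_vectors(R_taught, R_current):
        # Rotation about the part center circularly shifts R on the
        # fixed theta grid; the best shift estimates the rotation.
        n = len(R_taught)
        corr = np.fft.ifft(np.fft.fft(R_current) *
                           np.conj(np.fft.fft(R_taught))).real
        shift = int(np.argmax(corr))
        angle_deg = 360.0 * shift / n      # rotation relative to teaching
        residual = np.linalg.norm(np.roll(R_taught, shift) - R_current)
        return angle_deg, residual         # small residual -> same model

The translation can then be taken as the difference between the current part center (Cx, Cy) and the taught one.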
[0023] Figure 4 shows the distribution of the edge-contour polar coordinate vectors of the three product types in the application example after reduction to three dimensions. The dimensionality reduction of the part contour polar coordinate vector is calculated as follows:
[0024] Let (x, y) denote the two-dimensional image coordinates and (Cx, Cy) denote the center coordinates of part 1 obtained by image processing. Taking (Cx, Cy) as the coordinate origin, the contour coordinates (x_i, y_i) of part 1 are converted to polar coordinates (θ_i, r_i), where i = 1, ..., N and N is the number of extracted contour points. Linear interpolation is performed on the array {(θ_i, r_i), i = 1, ..., N} to obtain the r values at a fixed θ array, forming the polar coordinate vector R.
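
A minimal numpy sketch of this construction follows; the function name and the choice of 360 fixed θ samples are assumptions for illustration.

    import numpy as np

    def contour_polar_vector(contour_xy, cx, cy, n_theta=360):
        # contour_xy: (N, 2) array of contour points (x_i, y_i);
        # (cx, cy): center of part 1 obtained from image processing.
        dx = contour_xy[:, 0] - cx
        dy = contour_xy[:, 1] - cy
        theta = np.arctan2(dy, dx)       # theta_i
        r = np.hypot(dx, dy)             # r_i
        order = np.argsort(theta)        # interpolation needs sorted theta
        theta_fixed = np.linspace(-np.pi, np.pi, n_theta, endpoint=False)
        # linear interpolation of r at the fixed theta array -> vector R;
        # period=2*pi lets the interpolation wrap around the contour
        return np.interp(theta_fixed, theta[order], r[order],
                         period=2 * np.pi)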
[0025] A single part 1 is placed at an arbitrary position and angle within the camera's field of view, and ten groups of images are collected and processed to obtain a vector group R_i, i = 1, ..., 10. The kernel matrix K of the vector group R_i, i = 1, ..., 10 is computed through the kernel function k(x, y), the eigenvalues λ = [λ_1, ..., λ_10] of the matrix K are calculated, and the eigenvectors corresponding to the three largest eigenvalues are taken as the basis vectors.
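
A sketch of this basis computation is given below, assuming a Gaussian (RBF) kernel; the patent only specifies a generic kernel function k(x, y), and the centering of K is the standard kernel-PCA convention rather than a step stated in the text.

    import numpy as np

    def kernel_basis(R_group, gamma=1e-3):
        # R_group: (10, n_theta) matrix whose rows are the vectors R_i
        # from the ten images of one part; assumed kernel k(x, y) = RBF.
        sq = np.sum(R_group ** 2, axis=1)
        K = np.exp(-gamma * (sq[:, None] + sq[None, :]
                             - 2.0 * R_group @ R_group.T))  # kernel matrix K
        n = len(K)
        one = np.ones((n, n)) / n
        Kc = K - one @ K - K @ one + one @ K @ one   # center K (kernel PCA)
        w, v = np.linalg.eigh(Kc)                    # eigenvalues ascending
        return v[:, -1:-4:-1]                        # top-3 eigenvectors

A new part's vector R can then be reduced to three dimensions by evaluating k(R, R_i) against the ten stored vectors and projecting the resulting row onto these three eigenvectors, which is what the three-dimensional distribution in Figure 4 visualizes.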
[0026] Through the present invention, the relative translation and rotation angle of the online part 1 with respect to the taught part 1 can be obtained, so that the robot can be controlled to accurately pick up the part at the specified position.
[0027] With regard to the protection scope of the present invention, those skilled in the art should understand that various modifications or variations that can be made without creative work on the basis of the technical solutions of the present invention still fall within the protection scope of the present invention.

