
Assembly image segmentation method and device based on deep learning and guided filtering

A deep learning and guided filtering technology, applied in the field of image processing, that addresses problems such as mutual occlusion of parts, reduced efficiency, and high manpower and material costs, and achieves the effects of improved data-fitting ability, improved segmentation ability, and strengthened learning ability.

Active Publication Date: 2021-08-17
QINGDAO TECHNOLOGICAL UNIVERSITY

AI Technical Summary

Problems solved by technology

This not only results in poor algorithm versatility and lower efficiency, but also consumes considerable manpower and material resources and increases costs.
2. Semantic segmentation based on deep learning can learn features automatically without hand-designed feature algorithms. In the field of mechanical assemblies, however, the lack of public data sets, the complex structure of mechanical products, the large number of small parts (such as thin shafts and bolts), and severe mutual occlusion lead to problems such as poor segmentation of small parts and blurred edges in the segmented images.
A further disadvantage of such schemes is that they do not account for different objects occupying different scale spaces, resulting in poor segmentation of small-scale objects.



Examples


Embodiment 1

[0034] Referring to Figure 1, an assembly image segmentation method based on deep learning and guided filtering comprises the following steps:

[0035] S1. Establish a data set including several assembly depth images.

[0036] S2. As shown in Figure 2, construct a semantic segmentation model comprising a feature extraction module, a feature fusion module and a filtering module:

[0037] Build the feature extraction module. The feature extraction module comprises, connected in sequence: 3×3 convolutional layer → ReLU activation layer → max pooling layer → 3×3 convolutional layer → ReLU activation layer → max pooling layer → 3×3 convolutional layer → ReLU activation layer → max pooling layer → 3×3 convolutional layer → ReLU activation layer → max pooling layer → 3×3 convolutional layer → ReLU activation layer → max pooling layer → 1×1 convolutional layer → 1×1 convolutional layer → 1×1 convolutional layer. Here, a 3×3 convolutional layer indicates that the convolutional layer uses a 3×3 convolution kernel...
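As a rough illustration, the layer sequence of paragraph [0037] could be written in PyTorch as follows; the channel widths, input channels and class count are assumptions, since the extract does not specify them:

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Five (3x3 conv -> ReLU -> max pool) blocks followed by three 1x1 convolutions,
    mirroring the layer order in paragraph [0037]. Channel widths are illustrative."""

    def __init__(self, in_channels=1, widths=(64, 128, 256, 512, 512), num_classes=10):
        super().__init__()
        layers, prev = [], in_channels
        for w in widths:
            layers += [
                nn.Conv2d(prev, w, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=2, stride=2),
            ]
            prev = w
        self.blocks = nn.Sequential(*layers)
        # Three 1x1 convolutional layers at the end of the extractor.
        self.head = nn.Sequential(
            nn.Conv2d(prev, prev, kernel_size=1),
            nn.Conv2d(prev, prev, kernel_size=1),
            nn.Conv2d(prev, num_classes, kernel_size=1),
        )

    def forward(self, x):
        skips = []
        for layer in self.blocks:
            x = layer(x)
            if isinstance(layer, nn.MaxPool2d):
                skips.append(x)  # pooled feature maps, reused later for channel fusion
        return self.head(x), skips
```

The pooled feature maps are returned alongside the final output because the fusion steps described in Embodiment 2 reuse them.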

Embodiment 2

[0052] Further, referring to Figure 3, the specific steps for generating the third feature map in Embodiment 1 are as follows:

[0053] The first feature map output by the last 1×1 convolutional layer in the feature extraction module (i.e., the first of the first feature maps) is upsampled by deconvolution (the upsampling module) and then channel-fused with the first feature map output by the penultimate max pooling layer of the feature extraction module (i.e., the second of the first feature maps); the fused map is passed through a convolution operation and nonlinear transformation (channel fusion module → 3×3 convolutional layer → ReLU activation layer → 3×3 convolutional layer → ReLU activation layer) to obtain the first result;

[0054] An upsampling operation is performed on the first result, which is then channel-fused, convolved and nonlinearly transformed together with the feature map output by the penultimate max pooling layer in the feature extraction module (i.e., the third first feature...
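One such fusion step could be sketched in PyTorch as below; the deconvolution parameters and channel counts are assumptions, and `deep` / `skip` stand for the deeper map being upsampled and the pooled feature map it is fused with:

```python
import torch
import torch.nn as nn

class FusionStep(nn.Module):
    """Upsample the deeper feature map by deconvolution, concatenate it with a
    pooled (skip) feature map along the channel axis, then apply two
    3x3 conv + ReLU layers, as in paragraph [0053]. Channel counts are illustrative."""

    def __init__(self, deep_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(deep_ch, deep_ch, kernel_size=2, stride=2)
        self.fuse = nn.Sequential(
            nn.Conv2d(deep_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, deep, skip):
        up = self.up(deep)                    # deconvolution (upsampling module)
        fused = torch.cat([up, skip], dim=1)  # channel fusion
        return self.fuse(fused)

# Example: fuse the extractor's final 1x1-conv output with the penultimate pooled map.
# first_result = FusionStep(deep_ch=10, skip_ch=512, out_ch=256)(deepest, skips[-2])
```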

Embodiment 3

[0057] The Sobel operator is used to extract the boundary features of the assembly image to be semantically segmented, yielding the boundary image; the assembly image to be segmented and the boundary image are input to the channel fusion module for channel fusion to obtain the guide image G. The guide image G is then passed through a 3×3 convolutional layer, a ReLU activation layer and a 1×1 convolutional layer to obtain the optimized guide image G′.
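Under the assumption of a single-channel depth image, this guide-image construction might look roughly as follows; the boundary image is taken as the Sobel gradient magnitude, and the small convolutional head would be trained jointly with the rest of the model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sobel_boundary(img):
    """Gradient-magnitude boundary image of a single-channel tensor of shape (N,1,H,W)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2)

class GuideHead(nn.Module):
    """Channel-fuse the input image with its boundary image, then apply
    3x3 conv -> ReLU -> 1x1 conv to produce the optimized guide image G'."""

    def __init__(self, mid_ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, mid_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, 1, kernel_size=1),
        )

    def forward(self, img):
        boundary = sobel_boundary(img)
        guide = torch.cat([img, boundary], dim=1)  # guide image G
        return self.net(guide)                     # optimized guide image G'
```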

[0058] According to the optimized guide image G′, the first segmentation image I is linearly filtered to obtain the second segmentation image O, which is expressed as:

[0059]–[0060] (formula images not reproduced in this extract)

[0061] O = A_h * I + b_h

[0062] Wherein i and k both represent pixel index values in the image; I_i represents the value of the i-th pixel in the first segmentation map I; O_i represents the value of the i-th pixel in the second segmentation map O; ... are the coefficients of the local linear function; ... indicates that the reconstruction...
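For reference, a generic guided filter can be sketched as below. This is the classic (non-learned) formulation in which the output is a local linear function of the guide; it is not the patent's exact filtering module, and the window radius and regularization epsilon are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, G, radius=4, eps=1e-3):
    """Classic guided filter: linearly filter input I using guide G.
    The output is a local linear transform of the guide whose coefficients
    come from a regularized least-squares fit of I on G in each window."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size)        # box (mean) filter

    mean_G, mean_I = mean(G), mean(I)
    corr_GG, corr_GI = mean(G * G), mean(G * I)
    var_G = corr_GG - mean_G * mean_G               # local variance of the guide
    cov_GI = corr_GI - mean_G * mean_I              # local covariance of guide and input

    a = cov_GI / (var_G + eps)                      # local linear coefficients
    b = mean_I - a * mean_G
    a_mean, b_mean = mean(a), mean(b)               # averaged coefficient maps
    return a_mean * G + b_mean                      # filtered (second) segmentation map

# Example: refine a soft segmentation map using the guide image.
# O = guided_filter(first_seg.astype(np.float32), guide.astype(np.float32))
```

In the patent's scheme the guide would be the optimized guide image G′ produced in Embodiment 3.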



Abstract

The invention relates to an assembly image segmentation method based on deep learning and guided filtering. The method comprises the following steps: S1, establishing a data set comprising a plurality of assembly images; S2, constructing a semantic segmentation model comprising a feature extraction module, a feature fusion module and a filtering module; S3, iteratively training the semantic segmentation model using the data set; and S4, inputting a to-be-segmented assembly image into the trained semantic segmentation model to obtain a segmented image. According to the invention, the second feature map and the third feature map are fused to obtain a multi-scale feature map, which recovers the low-order features of the to-be-segmented assembly image and increases the complexity of the semantic segmentation model, thereby improving its data-fitting and segmentation capability. The guided filter module optimizes the segmentation edges of the segmented image according to the guide image, further strengthening the segmentation of parts at every scale in the assembly.
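Steps S3 and S4 could be realized along the lines of the sketch below; it is illustrative only, and the data-loader interface, loss, optimizer and hyperparameters are all assumptions rather than the patent's training procedure:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_segmentation_model(model, loader, epochs=50, lr=1e-3, device="cpu"):
    """S3: iteratively train a semantic segmentation model with pixel-wise
    cross-entropy. `loader` is assumed to yield (depth_image, label_map) pairs
    of shapes (N,1,H,W) and (N,H,W)."""
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for img, label in loader:
            img, label = img.to(device), label.to(device)
            logits = model(img)
            # Bring logits to label resolution before computing the loss.
            logits = F.interpolate(logits, size=label.shape[-2:],
                                   mode="bilinear", align_corners=False)
            loss = loss_fn(logits, label.long())
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

# S4: run the trained model on a to-be-segmented assembly image; the soft prediction
# can then be refined with the guided_filter sketch above before taking the argmax.
```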

Description

Technical field

[0001] The invention relates to an assembly image segmentation method and device based on deep learning and guided filtering, and belongs to the field of image processing.

Background technique

[0002] At present, the manufacturing industry has entered the era of mass customization. Production with variable product types means that assembly lines are constantly reorganized, which increases the difficulty of assembly for workers and makes errors such as wrong assembly sequence, missed assembly and wrong assembly more likely. If such errors are not detected in time, they directly affect product quality and assembly efficiency and waste time and money in the subsequent assembly process. By segmenting the depth image of the mechanical assembly through semantic segmentation, analyzing the segmented image and identifying the assembled parts, missing assembly, wrong assembly, wrong assembly sequence and the like can be monitored. ...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/10; G06K9/62; G06N3/04; G06N3/08
CPC: G06T7/10; G06N3/08; G06T2207/20081; G06T2207/20084; G06T2207/20221; G06T2207/30204; G06N3/045; G06F18/241; G06F18/253; Y02P90/30
Inventors: 陈成军, 张春林, 李东年, 洪军
Owner: QINGDAO TECHNOLOGICAL UNIVERSITY