Image processing-based automobile body paint film defect detection and identification method

A defect detection and identification method, applied in image data processing, image enhancement and image analysis, which solves the problem of inaccurate identification of car body paint film defects and achieves the effects of reducing production cost, improving accuracy and improving quality.

Inactive Publication Date: 2017-01-18
JILIN UNIV
Cites: 6 | Cited by: 3

AI-Extracted Technical Summary

Problems solved by technology

[0007] The invention provides a method for detecting and identifying vehicle body paint film defects based on image ...

Method used

[0165] The present invention acquires a large number of defect characteristic parameters by improving the image quality, realizes the detection of smaller paint film ...

Abstract

The invention relates to a detection and identification method for automobile body paint film defects, in particular an image processing-based automobile body paint film defect detection and identification method. The detection and identification method includes the following steps: step one, image acquisition: a laser emitter vertically irradiates the surface of a piece to be tested so that an image can be obtained; step two, image pre-processing: after the automobile body paint film defect image is acquired, the image is pre-processed; step three, automobile body paint film defect characteristic parameter extraction: the geometrical characteristics, gray characteristic values and horizontal-direction main projection characteristic values of the automobile body paint film are selected as defect characteristic parameters; step four, characteristic parameter dimensionality reduction; and step five, paint film defect judgment based on a support vector machine. By improving image quality, the method obtains a large number of defect characteristic parameters, enabling the detection of small paint film defects, improving the accuracy of automobile body paint film detection and the quality of the automobile body paint film, and decreasing automobile production cost.

Application Domain

Technology Topic

Image


Examples

  • Experimental program(1)

Example Embodiment

[0055] Referring to Figure 1, a method for detecting and recognizing car body paint film defects based on image processing includes the following steps:
[0056] Step 1: Image acquisition. The laser transmitter 2 of the car body paint film defect detection system irradiates the surface of the test piece 1 vertically, and an image is obtained;
[0057] Step 2: Image preprocessing. After the car body paint film defect image is collected, it is preprocessed to improve image quality and thereby the accuracy and efficiency of defect detection;
[0058] Step 3: Extraction of car body paint film defect characteristic parameters. The geometric features, gray feature values and horizontal-direction main projection feature values of the car body paint film are selected as defect characteristic parameters;
[0059] Step 4: Feature parameter dimensionality reduction. Principal component analysis is used to reduce the dimension of the feature parameters;
[0060] Step 5: Paint film defect identification by support vector machine. The FarutoUltimate3.0 toolbox developed by Professor Faruto is used to identify and classify paint film defects.
[0061] Referring to Figures 2-a and 2-b, the car body paint film defect detection system described in step one includes a laser transmitter 2, a camera 3 and an information processing terminal device 4. The laser transmitter 2 is placed 1.5 meters obliquely in front of the DUT 1, and the camera 3 is placed 1.5 meters directly above the DUT 1. The laser transmitter 2 emits a line laser that irradiates the surface of the DUT 1 vertically; the camera 3 collects images and transmits them to the information processing terminal device 4 for processing.
[0062] During inspection, the camera 3 is located 1.5 meters directly above the DUT 1 and is a DALSA P2-22-04K30 series camera with a Nikon 35 mm fixed-focus lens; the laser transmitter 2, an LSL-450-34-R red LED line light source, is placed 1.5 meters obliquely in front of the DUT 1.
[0063] The specific steps of step one are as follows:
[0064] The laser transmitter 2 irradiates the surface of the test piece 1 vertically to obtain an image. The present invention takes six common defects as experimental research objects, namely blistering, granules, sagging, pinholes, scratches and orange peel, to verify the detection method of the present invention. The invention collects 15 samples for each type of paint film defect, for a total of 90 image samples. Figures 3-a, 3-b, 3-c, 3-d, 3-e and 3-f list only five samples of each defect type.
[0065] The specific steps of step two are as follows. After the car body paint film defect image is collected, it must first be preprocessed to improve image quality and thereby defect detection accuracy and efficiency. The image preprocessing method adopted by the present invention mainly includes the following steps:
[0066] 21) Image cropping;
[0067] To improve detection efficiency, the image is first cropped to remove unnecessary background information and increase calculation speed. A gray threshold is used to select the effective area: the gray values are used to search for the borders of the region of interest, and the area within these borders is the effective area.
[0068] Figure 4-a is the original image of a blistering defect, with a pixel size of 144×143. The image obtained after threshold-based cropping is shown in Figure 4-b, with a pixel size of 83×76; the number of pixels has been reduced by 14,284.
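The threshold-based cropping described above can be sketched in a few lines of numpy; the threshold value and the image contents here are illustrative only, not the patent's actual data:

```python
import numpy as np

def crop_to_foreground(img, thresh):
    """Crop an image to the bounding box of pixels brighter than `thresh`.

    A minimal sketch of the gray-threshold cropping step; the threshold
    value itself would be chosen per acquisition setup.
    """
    mask = img > thresh
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return img[r0:r1 + 1, c0:c1 + 1]

# Toy example: a 144x143 frame with a bright 83x76 region of interest.
frame = np.zeros((144, 143), dtype=np.uint8)
frame[30:113, 40:116] = 200            # 83 rows x 76 cols of signal
cropped = crop_to_foreground(frame, thresh=50)
print(cropped.shape)                   # → (83, 76)
print(frame.size - cropped.size)       # → 14284 fewer pixels, as in the text
```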
[0069] 22) Eliminate uneven illumination
[0070] The environment of paint film defect detection is harsh, and the collected images are easily degraded by illumination. The illumination influence may come from unevenness of the light source itself, or from reflections caused by the characteristics of the surface of the DUT 1. When the paint film defect image is affected by light reflection, some areas of the image become overly bright, which impairs the extraction of paint film defect features in the dark areas and hence the detection accuracy.
[0071] The present invention deals with the problem of uneven illumination by top-hat and bottom-hat transformations: the former is mainly used to process objects to be detected in bright areas, the latter objects to be detected in dark areas.
[0072] The top-hat transformation h of image f is defined as the difference between image f and its opening, which can be expressed as:
[0073] h = f − (f ∘ s) (1)
[0074] The bottom-hat transformation h′ of image f is defined as the difference between the closing of image f and image f, which can be expressed as:
[0075] h′ = (f • s) − f (2)
[0076] where s is a structuring element.
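Under definitions (1) and (2), the two transforms can be sketched in numpy; the flat k×k structuring element and the min/max-window implementation of grayscale erosion and dilation are assumptions for illustration:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def _erode(f, k):
    # Grayscale erosion with a flat k x k structuring element: windowed min.
    p = k // 2
    return sliding_window_view(np.pad(f, p, mode='edge'), (k, k)).min(axis=(2, 3))

def _dilate(f, k):
    # Grayscale dilation with a flat k x k structuring element: windowed max.
    p = k // 2
    return sliding_window_view(np.pad(f, p, mode='edge'), (k, k)).max(axis=(2, 3))

def opening(f, k):
    return _dilate(_erode(f, k), k)

def closing(f, k):
    return _erode(_dilate(f, k), k)

def top_hat(f, k):      # equation (1): h = f - (f ∘ s)
    return f - opening(f, k)

def bottom_hat(f, k):   # equation (2): h' = (f • s) - f
    return closing(f, k) - f

f = np.full((7, 7), 50, dtype=int)
f[3, 3] = 80            # a bright spike, picked up by the top-hat
g = np.full((7, 7), 50, dtype=int)
g[3, 3] = 20            # a dark pit, picked up by the bottom-hat
print(top_hat(f, 3)[3, 3])      # → 30
print(bottom_hat(g, 3)[3, 3])   # → 30
```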
[0077] 23) Image noise reduction
[0078] During image acquisition, both environmental noise and noise generated in the hardware degrade image quality and reduce detection accuracy, so the acquired image requires noise reduction. The invention adopts an adaptive Wiener filter to denoise the image.
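A minimal numpy sketch of an adaptive Wiener (wiener2-style) filter, assuming local statistics over a k×k window and a noise power estimated as the average local variance; the patent does not specify these details:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def wiener_adaptive(f, k=3):
    """Adaptive Wiener filter sketch: a pixel in a flat region is pulled
    toward the local mean, while a pixel in a high-variance (detail)
    region is largely kept. Noise power is estimated as the average of
    the local variances; this is an assumption, not the patent's exact
    implementation.
    """
    f = np.asarray(f, dtype=float)
    p = k // 2
    win = sliding_window_view(np.pad(f, p, mode='edge'), (k, k))
    mu = win.mean(axis=(2, 3))          # local mean
    var = win.var(axis=(2, 3))          # local variance
    noise = var.mean()                  # global noise power estimate
    gain = np.maximum(var - noise, 0.0) / np.maximum(var, 1e-12)
    return mu + gain * (f - mu)

rng = np.random.default_rng(1)
noisy = 100 + 5 * rng.standard_normal((32, 32))
smoothed = wiener_adaptive(noisy, k=5)
print(smoothed.var() < noisy.var())     # the filter suppresses the noise
```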
[0079] 24), image contrast enhancement
[0080] Uneven illumination and the shape and structure of the car body surface impair the quality of the collected image. The key to image enhancement is therefore to enhance the important details of the image to obtain a good visual effect, since these details are crucial for the subsequent image analysis. The present invention adopts a morphology-based enhancement method to improve the image contrast, as follows:
[0081] a. Use basic morphological operations to construct two filters to process the image to obtain bright area information and dark area information;
[0082] First, the morphological opening and closing operations are used to construct two image filters, respectively:
[0083] f ∘ B (3)
[0084] f • B (4)
[0085] where the image is denoted by f and the structuring element by B.
[0086] b. Use image gray value and difference calculation to enhance image contrast;
[0087] The two image filters constructed by formulas (3) and (4) are used to filter f to obtain the bright and dark areas of the image. When extracting the bright area H, the influence of low-gray-value pixels is considered and a low gray threshold T is set; the operation of extracting the bright area H can be expressed as:
[0088]–[0090] (Equations (5)–(7) are not reproduced in this extract.)
[0091] When extracting the dark area L, the influence of high-gray-value pixels is considered and a higher gray threshold T′ is set; the operation of extracting the dark area L can be expressed as:
[0092]–[0094] (Equations (8)–(10) are not reproduced in this extract.)
[0095] Finally, the gray-scale enhanced image I of the original image f is:
[0096] I = f + H − L
[0097] After the image is cropped, the morphology-based image enhancement method proposed by the present invention is used to improve the image contrast: Figure 4-c shows the extracted dark regions of the original image, Figure 4-d the extracted bright regions, and Figure 4-e the final contrast-enhanced result.
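The enhancement I = f + H − L can be sketched as follows. Thresholding the top-hat and bottom-hat responses by T and T′ to obtain H and L is an assumption, since equations (3)–(10) are not reproduced in the source extract:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def _win(f, k):
    p = k // 2
    return sliding_window_view(np.pad(f, p, mode='edge'), (k, k))

def _erode(f, k):
    return _win(f, k).min(axis=(2, 3))   # flat-element grayscale erosion

def _dilate(f, k):
    return _win(f, k).max(axis=(2, 3))   # flat-element grayscale dilation

def enhance(f, k=3, T=5, T2=5):
    """Morphology-based contrast enhancement, I = f + H - L.

    H keeps bright detail (top-hat response at or above the low threshold T),
    L keeps dark detail (bottom-hat response at or above the threshold T2).
    The thresholding rule is illustrative, not the patent's exact formula.
    """
    f = f.astype(int)
    open_f = _dilate(_erode(f, k), k)                # opening f ∘ B
    close_f = _erode(_dilate(f, k), k)               # closing f • B
    H = np.where(f - open_f >= T, f - open_f, 0)     # bright-area detail
    L = np.where(close_f - f >= T2, close_f - f, 0)  # dark-area detail
    return f + H - L

f = np.full((7, 7), 50)
f[2, 2] = 80    # bright detail is pushed brighter
f[4, 4] = 20    # dark detail is pushed darker
I = enhance(f, k=3, T=5, T2=5)
print(I[2, 2], I[4, 4])   # → 110 -10
```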
[0098] 25), image segmentation;
[0099] Image segmentation divides an image into disjoint regions with particular meaning, each of which satisfies a certain similarity criterion in features such as grayscale and texture. The segmentation result plays a very important role in the subsequent defect feature parameter extraction step. Graph-theory-based image segmentation may over-segment an image; to improve the segmentation result, this application improves the original graph-theory-based segmentation method.
[0100] a) Construct a weighted undirected graph of the image;
[0101] For a digital image I of pixel size (r×s), if pixels p and p′ are adjacent, formula (11) defines the pixel set V of the digital image I, formula (12) the set E of adjacent pixel pairs, and formula (13) an edge e connecting pixels p and p′. Formulas (11), (12) and (13) are as follows:
[0102] V = {p_ij = (i, j) | 1 ≤ i ≤ r, 1 ≤ j ≤ s} (11)
[0103] E = {e = {p_ij, p_i′j′} | p_ij, p_i′j′ ∈ V} (12)
[0104] e = {p, p′} ∈ E (13)
[0105] The difference between the two pixels of an edge e is denoted d_e ≥ 0; the larger d_e is, the greater the difference between the two pixels. The difference set D of adjacent nodes in the weighted undirected graph can be expressed by formula (14):
[0106] D = {d_e | e ∈ E} (14)
[0107] Then the weighted undirected graph constructed from the digital image I is:
[0108] N(I) = (G = (V, E), D) (15)
[0109] b) Image segmentation is performed by layer-by-layer segmentation;
[0110] The graph-theory-based image segmentation is expressed by formula (16):
[0111] S = {S_0, S_1, ..., S_K} (16)
[0112] where K denotes the number of segmentation layers, and S_0 = {R_1^0 = V} is the initial region of the image segmentation;
[0113] S_K is the final region set after image segmentation;
[0114] The graph-theory-based image segmentation proceeds layer by layer, and the segmentation result depends on the thresholds {α_0, α_1, ..., α_K}; the relationship between the thresholds and D = {d_e | e ∈ E} can be expressed by formula (17):
[0115] α_K = max d_e > α_{K-1} > ... > α_0 = min d_e (17)
[0116] The basic principle of graph-theory-based image segmentation is: when d_e ≥ α_t, pixel p and pixel p′ belong to different regions.
[0117] The graph-theory-based image segmentation method is used to segment the feature image after threshold processing to obtain the defect feature area, as shown in Figure 4-f.
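A simplified stand-in for the layer-by-layer segmentation: pixels are nodes, 4-neighbour edges carry weights d_e, and at each layer t every edge with d_e below α_t is merged with union-find, so regions only grow from layer to layer. This illustrates the principle of formulas (11)–(17), not the patent's exact algorithm:

```python
import numpy as np

def find(parent, x):
    # Union-find with path halving.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def segment_layers(img, alphas):
    """Layer-by-layer graph segmentation sketch over 4-neighbour edges."""
    r, s = img.shape
    parent = list(range(r * s))
    # Build the edge set E with weights D = {d_e} (absolute gray difference).
    edges = []
    for i in range(r):
        for j in range(s):
            if i + 1 < r:
                edges.append((abs(int(img[i, j]) - int(img[i + 1, j])),
                              i * s + j, (i + 1) * s + j))
            if j + 1 < s:
                edges.append((abs(int(img[i, j]) - int(img[i, j + 1])),
                              i * s + j, i * s + j + 1))
    for alpha in alphas:                 # alpha_0 < alpha_1 < ... < alpha_K
        for d, a, b in edges:
            if d < alpha:                # d_e >= alpha_t keeps p, p' separate
                ra, rb = find(parent, a), find(parent, b)
                if ra != rb:
                    parent[rb] = ra
    return np.array([find(parent, x) for x in range(r * s)]).reshape(r, s)

# Toy image: dark background with one bright square -> two regions.
img = np.zeros((6, 6), dtype=np.uint8)
img[2:5, 2:5] = 200
labels = segment_layers(img, alphas=[10])
print(len(np.unique(labels)))   # → 2
```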
[0118] The specific steps of step three are as follows:
[0119] When extracting defect feature information, in accordance with the general requirements of defect features, the present invention selects geometric features, gray feature values, and main projection feature values in the horizontal direction (0 degrees) as defect feature parameters. There are six defect types in the experiment, 15 samples per defect, and 23 feature parameter values extracted per defect image, giving 2070 data values in total. Because of the large amount of data, this application does not list all of it; the feature parameters of one image per defect are shown in Table 1.
[0120] Table 1 Defect characteristics of paint film
[0121]–[0122] (Table 1 is not reproduced in this extract.)
[0123] The specific steps of step four are as follows:
[0124] Professor Faruto of the MATLAB Technology Forum provides auxiliary function plug-ins for libsvm-mat-2.89-3 (FarutoUltimate3.0) that make it convenient to select the best parameters. This application therefore uses the FarutoUltimate3.0 toolbox developed by Professor Faruto to identify and classify paint film defects. Proceed as follows:
[0125] (1) Prepare data
[0126] First, the support vector machine is trained with sample data; 15 samples are used for each type of car body paint film defect. Of these, 10 samples are selected for training and the remaining 5 are used as test samples.
[0127] (2) Normalize the data
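A minimal sketch of the normalization step, assuming the usual libsvm-style min-max scaling fitted on the training data (the patent does not spell out its formula):

```python
import numpy as np

def minmax_scale(train, test, lo=0.0, hi=1.0):
    """Scale each feature column to [lo, hi] using the training data's
    min and max, then apply the same mapping to the test data. This
    libsvm-style convention is an assumption for illustration.
    """
    fmin = train.min(axis=0)
    span = train.max(axis=0) - fmin
    span = np.where(span == 0, 1.0, span)   # guard constant features
    scale = lambda X: lo + (hi - lo) * (X - fmin) / span
    return scale(train), scale(test)

train = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
test = np.array([[2.5, 15.0]])
tr, te = minmax_scale(train, test)
print(tr.min(), tr.max())   # → 0.0 1.0
print(te)                   # → [[0.25 0.25]]
```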
[0128] (3) Reduce the dimension of feature parameters
[0129] In defect detection, a large number of extracted feature parameters leads to the following four problems: multicollinearity makes the solution space unstable, which may lead to incoherent results; feature data becomes sparse in high dimensions (68% of the values of a one-dimensional normal distribution fall within one standard deviation of the mean, but only 0.02% do in ten-dimensional space); too many variables hinder the establishment of search rules; and analysis at the level of individual variables may overlook relationships between variables, for example several predictors may fall into a group that reflects only one aspect of the data. Therefore, it is necessary to reduce the dimension of the feature parameters. Dimensionality reduction not only overcomes the above problems, but also has three advantages: 1. it reduces the number of predictors; 2. it ensures that the resulting variables are independent of each other; 3. it provides a framework for explaining the results. Feature dimensionality reduction is thus very useful for machine learning tasks; its main purpose is to improve computational efficiency while retaining the essential features.
[0130] The invention adopts the principal component analysis method to reduce the dimension of the characteristic parameters.
[0131] The purpose of principal component analysis is to maximize the inherent information of the original data after dimensionality reduction. The importance of each principal component value is determined by the variance of the data in the direction of the feature parameter projection.
[0132] The calculation process of the principal component analysis method is specifically divided into the following four steps.
[0133] 1. Feature centering: the mean of each feature dimension is subtracted from that dimension's data to obtain the matrix B, so that every dimension has mean 0 ("dimension" here refers to a feature or attribute);
[0134] 2. Calculate the covariance matrix C of the B matrix;
[0135] 3. Calculate the eigenvalues ​​and eigenvectors of the covariance matrix C, the eigenvectors obtained are the new feature data sets obtained, and the corresponding eigenvalues ​​are the contribution rate of each eigenvector in the sample data;
[0136] 4. Determine the cumulative contribution rate, and select the feature vector with a large contribution rate as the new defect feature data.
[0137] Suppose the sample data is X_i = (X_i1, X_i2, ..., X_ip), i = 1, 2, ..., n. After the principal component analysis method is applied, the eigenvalues of the covariance matrix C are λ_1 ≥ λ_2 ≥ ... ≥ λ_p ≥ 0, with corresponding unit-orthogonal eigenvectors a_1, a_2, ..., a_p. The i-th principal component can then be expressed by equation (17), and the contribution rate of the corresponding principal component by equation (18):
[0138] Y_i = a_i1 X_1 + a_i2 X_2 + ... + a_ip X_p, i = 1, 2, ..., p (17)
[0139] Conλ_i = λ_i / Σ_{i=1}^{p} λ_i (18)
[0140] Arranging the p principal components in order of contribution rate, the cumulative contribution rate of the first m (m < p) principal components is given by equation (19):
[0141] Conλ_{1-m} = Σ_{i=1}^{m} λ_i / Σ_{i=1}^{p} λ_i (19)
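The four computation steps and equations (17)–(19) can be sketched with numpy as follows; the synthetic data and the 90% cumulative-contribution target are illustrative only:

```python
import numpy as np

def pca_reduce(X, cum_target=0.90):
    """PCA dimensionality reduction following the four steps above:
    centering, covariance, eigendecomposition, cumulative contribution.
    A generic numpy sketch, not the patent's FarutoUltimate code.
    """
    B = X - X.mean(axis=0)                # step 1: center each feature
    C = np.cov(B, rowvar=False)           # step 2: covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)  # step 3: eigendecomposition
    order = np.argsort(eigvals)[::-1]     # sort eigenvalues descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    contrib = eigvals / eigvals.sum()     # contribution rates, eq. (18)
    # step 4: smallest m whose cumulative contribution reaches the target
    m = int(np.searchsorted(np.cumsum(contrib), cum_target) + 1)
    return B @ eigvecs[:, :m], contrib

rng = np.random.default_rng(0)
# Three features, two of them strongly correlated: PCA should need
# fewer than three components to reach the variance target.
base = rng.normal(size=(50, 1))
X = np.hstack([base,
               2 * base + 0.1 * rng.normal(size=(50, 1)),
               rng.normal(size=(50, 1))])
Y, contrib = pca_reduce(X, cum_target=0.90)
print(Y.shape[1] < X.shape[1])   # fewer components retain the target variance
```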
[0142] Taking the blistering defect as an example, the characteristic parameters of the blistering defect extracted in this paper are represented by matrix A.
[0143] A = [ 0.9702 0.9650 1.2820
0.8755 0.5040 2.3410
0.7645 −0.0460 1.8260
0.2968 −1.2360 10.1640
0.0686 0.0190 0.5990
0.0336 5.9870 0.0370
0 0.0110 −0.0890
0 −0.3770 −0.7890 ] (20)
[0144] Using the principal component analysis method, the eigenvalue D of the covariance matrix corresponding to the matrix A, the principal component value pc of the characteristic parameter of the blistering defect and the cumulative contribution rate ans of the principal component value can be obtained.
[0145] D = [ 0.1712 0 0; 0 3.8523 0; 0 0 13.2528 ] (21)
[0146] pc = [ 0.0240 −0.0113 0.9996; −0.3376 0.9411 0.0187; 0.9410 0.3379 −0.0187 ] (22)
[0147] ans = [ 0.7671 0.9901 1.0000 ] (23)
[0148] If the cumulative contribution rate is set to 99%, the selected principal component values are 0.0240, −0.0113, 0.9996, −0.3376, 0.9411 and 0.0187; that is, the original 24 statistical feature parameters are replaced by 6 principal component values, a 24/6 = 4-fold reduction of the feature parameters, while 99% of the defect feature information is retained, which shows that calculation efficiency is effectively improved.
[0149] In the experiment, the cumulative contribution rate is set to 90%. Table 2 shows the calculated principal component values of the defects. Taking the blistering defect as an example, the PCA method uses 6 principal components to replace the original 23 statistical feature parameters, a dimensionality reduction ratio of 23/6 ≈ 3.8.
[0150] Table 2 Principal component values ​​of each defect feature
[0151]–[0152] (Table 2 is not reproduced in this extract.)
[0153] (4) Training data, generating model
[0154] The present invention uses a support vector machine to generate a model.
[0155] The support vector machine has two parameters to determine: the penalty parameter c and the kernel function parameter g. Genetic algorithm, particle swarm optimization and grid search are used to find their optimal values; the parameter optimization results are shown in Figures 5-a, 5-b and 5-c. Comparing the results, the optimal penalty parameter c is 0.0039063 and the optimal kernel function parameter g is 0.0039063.
[0156] After the principal component values of the defect features of the 10 training samples are obtained, they are input into the support vector machine, and the defect feature parameters are matched with the defect types; that is, the recognition and classification model is generated.
[0157] (5) Test
[0158] The remaining 5 samples are used to test the generated recognition and classification model, to observe whether it matches the defect types with 100% accuracy.
[0159] Figure 6 is the classification result after testing the support vector machine. With the remaining 5 samples as test samples, the accuracy rate is 100%, indicating that the support vector machine for car body paint film defect detection has been trained and can be used to detect car body paint film defects.
[0160] (6) Identification and classification of paint film defects
[0161] In the experiment, 30 samples of each defect type were selected; the original images were processed with the image processing techniques described above to obtain feature parameters, which were then input into the trained support vector machine to identify and classify the paint film defects. The test results are shown in Table 3.
[0162] Table 3 Experimental results
[0163]–[0164] (Table 3 is not reproduced in this extract.)
[0165] By improving image quality, the invention obtains a large number of defect feature parameters, realizes the detection of smaller paint film defects, and improves the accuracy of car body paint film detection, thereby improving car body paint film quality and reducing automobile production cost.


