Crack identification method based on deep learning

A crack identification technology based on deep learning, applied in the field of deep-learning-based crack identification, which can solve the problems of poor recognition efficiency, accuracy, and adaptability, and achieves the effects of solving the shortage of training-set data samples and improving recognition efficiency.

Pending Publication Date: 2021-12-21
CENT SOUTH UNIV
Cites: 6, Cited by: 3

AI-Extracted Technical Summary

Problems solved by technology

[0006] In view of this, the present application provides a crack identification method based on deep learning, which...

Method used

For example, after repeating steps 2.1 to 2.4 a total of 4 times, the discriminator parameters are fixed and steps 2.1 to 2.3 are repeated a total of 2 times; the resulting error is then back-propagated and the SGD optimization algorithm is used to update the generator parameters, making the generator loss Loss_G as small as possible so that subsequently generated images are of higher quality and closer to the feature distribution of real images. Of course, the preset standard can also be set to a certain fixed value according to actual needs.
The deep-learning-based crack identification method provided by this embodiment realizes automatic crack generation by building a deep convolutional adversarial network, which solves the problem of insufficient training-set data samples in deep-learning-based crack identification and increases the diversity of the data set samples. The enriched samples are then input into the improved YOLOv4 neural network for training to obtain a crack identification model that recognizes crack images to be detected collected in real time, ...

Abstract

The invention provides a crack identification method based on deep learning, belonging to the technical field of image processing. The method specifically comprises the steps of: constructing a deep convolutional adversarial network; acquiring a plurality of real crack image samples to train the deep convolutional adversarial network and obtain adversarial crack image samples; obtaining mixed crack image samples; labeling all the mixed crack image samples, and taking the label information files and the mixed crack image samples as a training set; training the improved YOLOv4 neural network with the training set until the network converges, and storing the converged network parameters as a crack identification model; and inputting the acquired to-be-detected crack image into the crack identification model, and outputting identification information. Through the scheme of the invention, the deep convolutional adversarial network is constructed to realize the automatic generation of cracks, and the crack identification model is obtained through improved YOLOv4 neural network training to recognize the to-be-detected crack image, so that the recognition efficiency, accuracy, and adaptability are improved.


Examples

  • Experimental program(1)

Example Embodiment

[0058] The present application will be described in detail below with reference to the accompanying drawings.
[0059] The embodiments of the present application are described below by way of specific examples, and those skilled in the art can easily understand other advantages and effects of the present application from the contents disclosed in this specification. Obviously, the described embodiments are merely some, rather than all, of the embodiments of the present application. The present application can also be implemented or applied through other different specific embodiments, and the details in this specification can be modified or changed in various ways based on different viewpoints and applications without departing from the spirit of the present application. It should be noted that the features in the following embodiments may be combined with each other as long as they do not conflict. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative labor shall fall within the scope of protection of the present application.
[0060] It should be noted that various aspects of the embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein can be embodied in a wide variety of forms, and that any specific structure and/or function described herein is merely illustrative. Based on the present application, those skilled in the art will appreciate that any aspect described herein can be implemented independently of any other aspect, and that two or more of these aspects can be combined in various ways. For example, a device can be implemented and/or a method can be practiced using any number of the aspects set forth herein. In addition, such a device can be implemented and/or such a method can be practiced using other structures and/or functionality in addition to or other than one or more of the aspects set forth herein.
[0061] It should also be noted that the drawings provided in the following embodiments only illustrate the basic concept of the present application in a schematic manner, so they show only the components related to the present application rather than the number, shape, and size of the components in actual implementation. The type, number, and proportion of each component can be changed arbitrarily in actual implementation, and the component layout may also be more complicated.
[0062] In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, those skilled in the art will appreciate that the aspects can be practiced without these specific details.
[0063] The present application provides a crack identification method based on deep learning, which can be applied to the crack detection and identification process in infrastructure inspection scenarios.
[0064] Referring to Figure 1, a flow chart of the deep-learning-based crack identification method provided by the present application: as shown in Figure 1, the method mainly includes the following steps.
[0065] Step 1: construct a deep convolutional adversarial network, wherein the deep convolutional adversarial network includes a generator and a discriminator;
[0066] Optionally, the loss functions of the generator and the discriminator are designed according to the Wasserstein distance, where the Wasserstein distance is defined as
[0067] $$W(P_r, P_g) = \inf_{\gamma \in \Pi(P_r, P_g)} \mathbb{E}_{(x, y) \sim \gamma}\big[\lVert x - y \rVert\big]$$
where $P_r$ is the distribution of the real crack image samples, $P_g$ is the distribution of the adversarial crack image samples, $x$ is a real crack image sample, $y$ is an adversarial crack image sample, and $\Pi(P_r, P_g)$ is the set of joint distributions of the two; the Wasserstein distance is the infimum, over all such joint distributions, of the expected distance between a real crack image sample and an adversarial crack image sample.
[0068] Further, the generator includes six deconvolution layers with 4 × 4 convolution kernels, wherein each of the first five deconvolution layers is followed by a Batch Normalization function and a ReLU activation function, and the last deconvolution layer is followed by a Tanh activation function.
[0069] Optionally, the discriminator includes six convolution layers with 4 × 4 convolution kernels, wherein each of the first five convolution layers is followed by a Batch Normalization function and a LeakyReLU activation function, and the last convolution layer is followed by a Sigmoid activation function.
[0070] In specific implementation, considering that existing means typically perform image recognition directly on real-time acquisitions, so that detection efficiency is not high and interference from environmental factors such as background or lighting causes differences in detection accuracy, a deep convolutional adversarial network can be constructed, wherein the deep convolutional adversarial network includes a generator G and a discriminator D, and the loss functions of the generator and the discriminator are designed according to the Wasserstein distance $W(P_r, P_g)$ defined in paragraph [0067] above, where $P_r$ is the distribution of the real crack image samples and $P_g$ is the distribution of the adversarial crack image samples.
[0071] The structures of the generator G and the discriminator D are shown in Figure 2. As shown in Figure 2(a), the generator includes six deconvolution layers DeConv with 4 × 4 convolution kernels, wherein each of the first five deconvolution layers is followed by a Batch Normalization function and a ReLU activation function, and the last deconvolution layer is followed by a Tanh activation function. As shown in Figure 2(b), the discriminator includes six convolution layers Conv with 4 × 4 convolution kernels, wherein each of the first five convolution layers is followed by a Batch Normalization function and a LeakyReLU activation function, and the last convolution layer is followed by a Sigmoid activation function, so as to increase image processing efficiency. Of course, the structures of the generator and the discriminator can also be adjusted according to actual needs, which will not be described further here.
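To make the structure of Figure 2 concrete, the following is a minimal PyTorch sketch of the generator and the discriminator. The patent fixes only the layer count (six), the 4 × 4 kernel size, and the activation pattern; the channel widths, the 3-channel image format, and the stride/padding choices below are assumptions selected so that a 100-dimensional noise vector maps to a 128 × 128 image.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Six 4x4 transposed-conv layers; BN+ReLU after the first five, Tanh after the last."""
    def __init__(self, z_dim=100, ch=1024):
        super().__init__()
        def block(cin, cout, stride, pad):
            return [nn.ConvTranspose2d(cin, cout, 4, stride, pad, bias=False),
                    nn.BatchNorm2d(cout),
                    nn.ReLU(inplace=True)]
        self.net = nn.Sequential(
            *block(z_dim, ch, 1, 0),          # 1x1   -> 4x4
            *block(ch, ch // 2, 2, 1),        # 4x4   -> 8x8
            *block(ch // 2, ch // 4, 2, 1),   # 8x8   -> 16x16
            *block(ch // 4, ch // 8, 2, 1),   # 16x16 -> 32x32
            *block(ch // 8, ch // 16, 2, 1),  # 32x32 -> 64x64
            nn.ConvTranspose2d(ch // 16, 3, 4, 2, 1, bias=False),  # 64x64 -> 128x128
            nn.Tanh(),
        )

    def forward(self, z):                     # z: (batch, 100)
        return self.net(z.view(z.size(0), -1, 1, 1))

class Discriminator(nn.Module):
    """Six 4x4 conv layers; BN+LeakyReLU after the first five, Sigmoid after the last."""
    def __init__(self, ch=64):
        super().__init__()
        def block(cin, cout):
            return [nn.Conv2d(cin, cout, 4, 2, 1, bias=False),
                    nn.BatchNorm2d(cout),
                    nn.LeakyReLU(0.2, inplace=True)]
        self.net = nn.Sequential(
            *block(3, ch),            # 128 -> 64
            *block(ch, ch * 2),       # 64  -> 32
            *block(ch * 2, ch * 4),   # 32  -> 16
            *block(ch * 4, ch * 8),   # 16  -> 8
            *block(ch * 8, ch * 16),  # 8   -> 4
            nn.Conv2d(ch * 16, 1, 4, 1, 0),  # 4x4 -> one score per image
            nn.Sigmoid(),
        )

    def forward(self, x):             # x: (batch, 3, 128, 128)
        return self.net(x).view(-1)
```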
[0072] Step 2: acquire multiple real crack image samples to train the deep convolutional adversarial network, obtaining an adversarial crack image sample similar to each of the real crack image samples;
[0073] Optionally, the size of the adversarial crack image samples is 128 × 128.
[0074] Further, as shown in Figure 3, step 2 of acquiring a plurality of real crack image samples to train the deep convolutional adversarial network to obtain an adversarial crack image sample similar to each of the real crack image samples includes:
[0075] Step 2.1: initialize the parameters of the generator and the discriminator;
[0076] In specific implementation, after the generator and the discriminator are constructed, the parameters of the generator and the discriminator can be initialized.
[0077] Step 2.2: randomly generate N 100-dimensional noise vectors, input all of the noise into the generator, and output N random crack image samples;
[0078] For example, randomly generate 12 100-dimensional noise vectors z, then input all of the noise into the generator G, which outputs 12 random crack images G(z).
[0079] Step 2.3: extract N real crack image samples and input them together with the N random crack image samples into the discriminator, output the probability P that an image is real, and calculate the loss function value of the discriminator and the loss function value of the generator;
[0080] Optionally, the calculation formula of the loss function value of the discriminator is
[0081] $$Loss\_D = \mathbb{E}\big[D(G(z))\big] - \mathbb{E}\big[D(x)\big]$$
and the calculation formula of the loss function value of the generator is
$$Loss\_G = -\mathbb{E}\big[D(G(z))\big]$$
where $x$ is a real crack image sample and $G(z)$ is a random crack image sample.
[0082] For example, 12 real crack samples x are extracted from the real crack image samples and input into the discriminator D together with the random crack image samples generated in step 2.2; the discrimination probability P is output, and the discriminator loss Loss_D and the generator loss Loss_G are calculated from this probability.
[0083] Step 2.4: fix the parameters of the generator, back-propagate the error, and update the parameters of the discriminator using the SGD optimization algorithm so that the loss function value of the discriminator satisfies a preset standard; after the update, truncate the discriminator weight parameters to the interval [-a, a];
[0084] For example, fix the parameters of the generator, back-propagate the error, and update the parameters of the discriminator using the SGD optimization algorithm so that the discriminator loss Loss_D is as small as possible; after the update, truncate the discriminator weight parameters to the interval [-0.01, 0.01].
[0085] Step 2.5: repeat steps 2.1 to 2.4 a total of k times, then fix the discriminator parameters and repeat steps 2.1 to 2.3 a total of j times; back-propagate the resulting error and update the generator parameters using the SGD optimization algorithm so that the loss function value of the generator satisfies the preset standard;
[0086] For example, repeat steps 2.1 to 2.4 a total of 4 times, then fix the discriminator parameters and repeat steps 2.1 to 2.3 a total of 2 times; back-propagate the resulting error and update the generator parameters using the SGD optimization algorithm so that the generator loss Loss_G is as small as possible, making the quality of subsequently generated images higher and closer to the feature distribution of real images. Of course, the preset standard may also be set to a certain fixed value according to actual needs.
[0087] Step 2.6: cyclically repeat steps 2.1 to 2.5 until the discrimination probability approaches 0.5, and store the parameters of the generator at this time;
[0088] In specific implementation, steps 2.1 to 2.5 can be repeated cyclically until the discrimination probability P approaches 0.5, i.e., the discriminator can no longer determine whether an input image is a real crack sample or an adversarial crack sample; the generator parameters are stored at this time, so that the crack images generated by the generator approximate the real image samples.
[0089] Step 2.7: use the trained generator to generate, for each real crack image sample, an adversarial crack image sample with a similar distribution.
[0090] For example, after the generator parameters are saved, the trained generator can be used to generate adversarial crack image samples whose distribution is close to that of each real crack image sample; a real crack image is shown in Figure 4(a), and the corresponding generated adversarial crack image is shown in Figure 4(b).
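The alternating procedure of steps 2.1 to 2.7 can be sketched as follows, assuming the Generator and Discriminator classes sketched above, the example values N = 12, k = 4, j = 2, weight clipping to [-0.01, 0.01], and a user-supplied real_loader that yields batches of real crack image tensors; the loss expressions follow the Wasserstein-based formulas of paragraphs [0080] and [0081]. This is a sketch under those assumptions, not the patent's reference implementation.

```python
import itertools
import torch

def train_wgan(G, D, real_loader, steps=10000, n=12, k=4, j=2, clip=0.01, lr=5e-5):
    """Alternating training per steps 2.1-2.7; lr, steps, and the 0.01 tolerance are assumptions."""
    opt_d = torch.optim.SGD(D.parameters(), lr=lr)   # step 2.4: SGD for the discriminator
    opt_g = torch.optim.SGD(G.parameters(), lr=lr)   # step 2.5: SGD for the generator
    reals = itertools.cycle(real_loader)

    for _ in range(steps):
        # Steps 2.2-2.4, repeated k times: update D with the generator fixed.
        for _ in range(k):
            x = next(reals)                      # batch of N real crack images (N, 3, 128, 128)
            z = torch.randn(n, 100)              # N 100-dimensional noise vectors
            fake = G(z).detach()                 # detach: generator parameters stay fixed
            loss_d = D(fake).mean() - D(x).mean()   # Loss_D = E[D(G(z))] - E[D(x)]
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()
            for p in D.parameters():             # truncate weights to [-0.01, 0.01]
                p.data.clamp_(-clip, clip)

        # Step 2.5, repeated j times: update G with the discriminator fixed.
        for _ in range(j):
            z = torch.randn(n, 100)
            loss_g = -D(G(z)).mean()             # Loss_G = -E[D(G(z))]
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()

        # Step 2.6: stop once the discrimination probability approaches 0.5.
        with torch.no_grad():
            p = D(G(torch.randn(n, 100))).mean().item()
        if abs(p - 0.5) < 0.01:
            torch.save(G.state_dict(), "generator.pth")  # store the generator parameters
            break
```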
[0091] Step 3: mix the adversarial crack image samples with the corresponding real crack image samples in a 1:1 ratio to obtain mixed crack image samples;
[0092] In specific implementation, after the adversarial crack image samples are obtained, the adversarial crack image samples can be mixed with the corresponding real crack image samples in a 1:1 ratio to obtain the mixed crack image samples.
[0093] Step 4: label all of the mixed crack image samples, and use the label information files and the mixed crack image samples as a training set;
[0094] For example, software such as LabelImg may be used to label the mixed crack image samples, producing label information files of xml type; the label information files and the mixed crack image samples are then used as the training set, which comprises a total of 1000 crack images and 1000 corresponding label information files.
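LabelImg writes Pascal VOC-style xml label files. As a hedged illustration of how such label information files can be read back when assembling the training set, the following sketch parses one file into (class, box) pairs; the tag names used are the standard VOC tags that LabelImg emits, and read_voc_labels is an illustrative helper name.

```python
import xml.etree.ElementTree as ET

def read_voc_labels(xml_path):
    """Parse one LabelImg xml file into a list of (class_name, (xmin, ymin, xmax, ymax))."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        name = obj.find("name").text            # e.g. "crack"
        bb = obj.find("bndbox")
        box = tuple(int(bb.find(tag).text) for tag in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, box))
    return boxes

# Example: pair each mixed crack image with its label file.
# dataset = [(img_path, read_voc_labels(img_path.replace(".jpg", ".xml")))
#            for img_path in image_paths]
```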
[0095] Step 5: train the improved YOLOv4 neural network with the training set until the network converges, and store the converged network parameters as the crack identification model;
[0096] Optionally, as shown in Figure 5, the improved YOLOv4 neural network comprises:
[0097] an image input module, for normalizing the size of the crack images;
[0098] an EfficientNet backbone feature extraction network, for performing feature extraction on the crack images and outputting effective feature layers;
[0099] a feature enhancement module, comprising an SPP module and a PANet module, for enhancing the effective feature layers extracted by the backbone feature extraction network and outputting enhanced feature layers, wherein the convolutions in the feature enhancement module comprise standard convolutions and depthwise separable convolutions;
[0100] a prediction module, for making prediction judgments according to the information contained in the enhanced feature layers to obtain prediction results.
[0101] In specific implementation, the structure of MBConvBlock, the core of the EfficientNet backbone feature extraction network, is shown in Figure 6. MBConvBlock comprises a large residual edge and a small residual edge, which prevent the network from overfitting and from gradient explosion or vanishing problems. The feature layer input into the MBConvBlock structure passes through one dimension-raising convolution operation, one depthwise separable convolution, one global average pooling operation, one reshape operation, one SE attention mechanism module, and one dimension-reducing convolution operation, after which the new feature layer is output.
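A minimal PyTorch sketch of an MBConvBlock following the sequence just described: a 1 × 1 dimension-raising convolution, a depthwise separable convolution, an SE attention module built from global average pooling and two 1 × 1 convolutions, and a 1 × 1 dimension-reducing convolution, with an identity skip as the residual edge. The SiLU activation, expansion ratio, and SE reduction ratio are EfficientNet-style assumptions rather than values fixed by the patent.

```python
import torch
import torch.nn as nn

class MBConvBlock(nn.Module):
    def __init__(self, cin, cout, kernel=3, expand=6, se_ratio=0.25):
        super().__init__()
        mid = cin * expand
        self.expand = nn.Sequential(              # 1x1 dimension-raising convolution
            nn.Conv2d(cin, mid, 1, bias=False), nn.BatchNorm2d(mid), nn.SiLU())
        self.depthwise = nn.Sequential(           # depthwise separable convolution
            nn.Conv2d(mid, mid, kernel, padding=kernel // 2, groups=mid, bias=False),
            nn.BatchNorm2d(mid), nn.SiLU())
        se_ch = max(1, int(cin * se_ratio))
        self.se = nn.Sequential(                  # SE attention: squeeze then excite
            nn.AdaptiveAvgPool2d(1),              # global average pooling (implicit reshape)
            nn.Conv2d(mid, se_ch, 1), nn.SiLU(),
            nn.Conv2d(se_ch, mid, 1), nn.Sigmoid())
        self.reduce = nn.Sequential(              # 1x1 dimension-reducing convolution
            nn.Conv2d(mid, cout, 1, bias=False), nn.BatchNorm2d(cout))
        self.use_residual = (cin == cout)         # identity skip: the large residual edge

    def forward(self, x):
        y = self.depthwise(self.expand(x))
        y = y * self.se(y)                        # channel-wise reweighting
        y = self.reduce(y)
        return x + y if self.use_residual else y
```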
[0102] Meanwhile, the SPP module can use max pooling at three different scales, with pooling kernel sizes of 5 × 5, 9 × 9, and 13 × 13; the three pooled output layers are then fused by a Concat operation that performs channel-wise stitching.
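A short sketch of the SPP module as described: three max-pooling branches with 5 × 5, 9 × 9, and 13 × 13 kernels, run with stride 1 and padding so the spatial size is preserved, fused by channel-wise concatenation. (The original YOLOv4 SPP also concatenates the unpooled input; the patent text mentions only the three pooled layers, so the input branch is omitted here.)

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    """Spatial pyramid pooling: 5x5, 9x9, 13x13 max pools fused by channel concatenation."""
    def __init__(self, kernels=(5, 9, 13)):
        super().__init__()
        # stride 1 and padding k // 2 keep the spatial size unchanged
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in kernels)

    def forward(self, x):
        # channel-wise "stitching" of the three pooled feature maps
        return torch.cat([pool(x) for pool in self.pools], dim=1)
```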
[0103] On the basis of the above embodiment, as shown in Figure 7, step 5 of training the improved YOLOv4 neural network with the training set until the network converges and storing the converged network parameters as the crack identification model includes:
[0104] Step 5.1: preprocess the training set data in the input module;
[0105] For example, the crack data is uniformly preprocessed to 416 × 416 × 3.
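A sketch of the step-5.1 preprocessing, assuming letterbox resizing (scale while preserving the aspect ratio, then pad) to reach the 416 × 416 × 3 input size; the patent fixes only the target size, and the gray padding value and [0, 1] normalization are assumptions.

```python
import numpy as np
import cv2  # OpenCV, assumed available for image I/O and resizing

def preprocess(image, size=416):
    """Resize an HxWx3 image to size x size x 3 with the aspect ratio preserved."""
    h, w = image.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    resized = cv2.resize(image, (nw, nh))
    canvas = np.full((size, size, 3), 128, dtype=np.uint8)  # gray letterbox padding
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas.astype(np.float32) / 255.0  # normalize to [0, 1]
```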
[0106] Step 5.2: input the processed data set into the backbone feature extraction module of the improved YOLOv4 for feature extraction, obtaining three effective feature layers with sizes of 52 × 52, 26 × 26, and 13 × 13;
[0107] That is, the processed data set is input into the backbone feature extraction module of the improved YOLOv4 for feature extraction, and three effective feature layers with sizes of 52 × 52, 26 × 26, and 13 × 13 are obtained.
[0108] Step 5.3: input the effective feature layers into the feature enhancement module of the improved YOLOv4 neural network for feature enhancement, and output the resulting three enhanced feature layers with sizes of 52 × 52, 26 × 26, and 13 × 13;
[0109] After the effective feature layers are obtained, the effective feature layers can be input into the feature enhancement module of the improved YOLOv4 neural network for feature enhancement, outputting three enhanced feature layers with sizes of 52 × 52, 26 × 26, and 13 × 13 respectively. Of course, feature layer sizes different from the above can also be set and adjusted according to actual needs, which will not be enumerated here.
[0110] Step 5.4: input the enhanced feature layers into the prediction module of the improved YOLOv4 network, output the resulting prediction results, and calculate the prediction error Loss_Y;
[0111] In step 5.4, the prediction error Loss_Y consists of the position error Lloc and the confidence error Lconf. Specifically, the position error Lloc is composed as follows:
[0112] $$L_{loc} = 1 - IoU + \frac{\rho^2\left(b, b^{gt}\right)}{c^2} + \alpha v$$
[0113] where $\rho\left(b, b^{gt}\right)$ is the Euclidean distance between the center points of the predicted box and the real box, $c$ is the diagonal distance of the minimum enclosing rectangular area that simultaneously contains the real box and the predicted box, $IoU$ is the ratio of the intersection to the union of the real box and the predicted box, $\alpha = \dfrac{v}{(1 - IoU) + v}$, and $v$ is calculated as follows:
[0114] $$v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2$$
[0115] where $w^{gt}$, $h^{gt}$ are the width and height of the real box, and $w$, $h$ are the width and height of the predicted box;
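A sketch of the position error $L_{loc}$ reconstructed above, for axis-aligned boxes given as (x1, y1, x2, y2) tensors; pred and gt are illustrative names, and the small eps terms guard against division by zero.

```python
import math
import torch

def ciou_loss(pred, gt, eps=1e-7):
    """L_loc = 1 - IoU + rho^2(b, b_gt) / c^2 + alpha * v for (x1, y1, x2, y2) boxes."""
    # intersection and union -> IoU
    x1 = torch.max(pred[..., 0], gt[..., 0]); y1 = torch.max(pred[..., 1], gt[..., 1])
    x2 = torch.min(pred[..., 2], gt[..., 2]); y2 = torch.min(pred[..., 3], gt[..., 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_g = (gt[..., 2] - gt[..., 0]) * (gt[..., 3] - gt[..., 1])
    iou = inter / (area_p + area_g - inter + eps)

    # rho^2: squared Euclidean distance between box centers
    cpx = (pred[..., 0] + pred[..., 2]) / 2; cpy = (pred[..., 1] + pred[..., 3]) / 2
    cgx = (gt[..., 0] + gt[..., 2]) / 2;     cgy = (gt[..., 1] + gt[..., 3]) / 2
    rho2 = (cpx - cgx) ** 2 + (cpy - cgy) ** 2

    # c^2: squared diagonal of the minimum enclosing box
    ex = torch.max(pred[..., 2], gt[..., 2]) - torch.min(pred[..., 0], gt[..., 0])
    ey = torch.max(pred[..., 3], gt[..., 3]) - torch.min(pred[..., 1], gt[..., 1])
    c2 = ex ** 2 + ey ** 2 + eps

    # aspect-ratio consistency term v and trade-off weight alpha
    w, h = pred[..., 2] - pred[..., 0], pred[..., 3] - pred[..., 1]
    wg, hg = gt[..., 2] - gt[..., 0], gt[..., 3] - gt[..., 1]
    v = (4 / math.pi ** 2) * (torch.atan(wg / (hg + eps)) - torch.atan(w / (h + eps))) ** 2
    alpha = v / ((1 - iou) + v + eps)

    return 1 - iou + rho2 / c2 + alpha * v
```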
[0116] The confidence error Lconf is specifically composed as follows:
[0117] $$L_{conf} = -\sum_{i}\sum_{j} I_{ij}^{obj}\left[\hat{C}_i \log C_i + \left(1 - \hat{C}_i\right)\log\left(1 - C_i\right)\right] - \lambda_{noobj}\sum_{i}\sum_{j} I_{ij}^{noobj}\left[\hat{C}_i \log C_i + \left(1 - \hat{C}_i\right)\log\left(1 - C_i\right)\right]$$
[0118] where $N$ is the number of classes, $\lambda_{noobj}$ is the error weight parameter, $I_{ij}^{obj}$ denotes that the $j$-th prediction box of the $i$-th grid contains the object, $I_{ij}^{noobj}$ denotes that the $j$-th prediction box of the $i$-th grid does not contain the object, $\hat{C}_i$ is the real confidence, and $C_i$ is the predicted confidence.
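A sketch of the confidence error $L_{conf}$ as a masked binary cross-entropy, where obj_mask and noobj_mask play the roles of $I_{ij}^{obj}$ and $I_{ij}^{noobj}$ and lambda_noobj is the error weight parameter; the masks are assumed to be precomputed from the label assignment, and the 0.5 default weight is an assumption since the patent gives no numeric value.

```python
import torch

def conf_loss(pred_conf, true_conf, obj_mask, noobj_mask, lambda_noobj=0.5, eps=1e-7):
    """Binary cross-entropy over boxes with and without objects, weighted by lambda_noobj."""
    bce = -(true_conf * torch.log(pred_conf + eps)
            + (1 - true_conf) * torch.log(1 - pred_conf + eps))
    return (bce * obj_mask).sum() + lambda_noobj * (bce * noobj_mask).sum()
```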
[0119] For example, the confidence threshold of the prediction module may be set to 0.5; after the enhanced feature layers are obtained, the enhanced feature layers are input into the prediction module of the improved YOLOv4 network, the resulting prediction results are output, and the prediction error Loss_Y is calculated.
[0120] Step 5.5: back-propagate the error, and adjust the parameters of the improved YOLOv4 neural network structure based on gradient descent;
[0121] After the prediction error is obtained, the error can be back-propagated, and the improved YOLOv4 neural network adjusts the parameters of its own structure by the gradient descent method.
[0122] Step 5.6: cyclically repeat steps 5.1 to 5.5 above until the prediction error tends to converge; network training is then complete, and the converged network parameters are saved as the crack identification model.
[0123] In specific implementation, considering that accurate model parameters may not be obtained in a single adjustment, steps 5.1 to 5.5 above can be repeated cyclically until the prediction error tends to converge; network training is then complete, and the converged network parameters are stored as the crack identification model.
[0124] Step 6: input the collected crack image to be detected into the crack identification model, and output the identification information of the crack image to be detected.
[0125] For example, crack images under different environmental conditions and with different shape characteristics can be captured as the crack images to be detected and then input into the crack identification model, which outputs the identification information of the cracks to be detected. As shown in Figures 8(a), 8(b), 8(c), 8(d), 8(e), and 8(f), the crack identification model can accurately and efficiently identify cracks in images under different environmental conditions and with different shape characteristics.
[0126] The deep-learning-based crack identification method provided by this embodiment realizes automatic crack generation by building a deep convolutional adversarial network, which solves the problem of insufficient training-set data samples in deep-learning-based crack identification and increases the diversity of the data set samples; the samples are then input into the improved YOLOv4 neural network for training to obtain a crack identification model that recognizes crack images to be detected collected in real time, improving recognition efficiency, accuracy, and adaptability.
[0127] It should be understood that parts of the present application can be implemented in hardware, software, firmware, or a combination thereof.
[0128] The above are only specific embodiments of the present application, but the scope of protection of the present application is not limited thereto; any change or replacement that those skilled in the art can easily conceive of within the technical scope disclosed by the present application shall be covered within the scope of protection of the present application. Therefore, the scope of protection of the present application shall be subject to the scope of protection of the claims.