In order to make the technical problems to be solved, the technical solutions, and the beneficial effects of the present invention clearer and more comprehensible, the present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the present invention, not to limit it.
As shown in Figure 1, a convolutional neural network-based image noise detection method of the present invention includes the following steps:
 10. Collect sample images and manually label and classify them according to the type of noise;
 20. Normalize the classified sample images and input them into the convolutional neural network system to train the classification model;
 30. The system randomly collects sample image blocks in the target area of the sample image, and performs noise classification;
40. Collect the wrongly classified sample image blocks and repeat steps 10 and 20 in sequence until the expected result is reached, at which point the classification model trained in step 20 is determined to be the best classification model. The expected result here means that the noise detection accuracy reaches a preset value; the preferred preset value in this embodiment is 90%.
50. Acquire the image to be detected, randomly collect image blocks to be detected from its target area, and use the best classification model to classify the noise of the collected image blocks, thereby obtaining the noise type of the image to be detected.
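The iterative procedure of steps 10 through 50 can be sketched as the following loop. This is only an illustration of the control flow: `fit` is a toy stand-in model (not the patented CNN), and the 90% threshold is the preset value mentioned in step 40.

```python
def fit(samples, labels):
    # Toy stand-in for step 20's CNN training: memorizes a lookup table.
    table = dict(zip(samples, labels))
    return lambda sample: table.get(sample)

def train_until_target(samples, labels, target_accuracy=0.90, max_rounds=10):
    """Repeat steps 10-40: retrain and recheck until the noise-detection
    accuracy reaches the preset value (90% in this embodiment)."""
    model, accuracy = None, 0.0
    for _ in range(max_rounds):
        model = fit(samples, labels)                    # step 20: train
        predictions = [model(s) for s in samples]       # step 30: classify
        wrong = [s for s, l, p in zip(samples, labels, predictions) if p != l]
        accuracy = 1.0 - len(wrong) / len(samples)
        if accuracy >= target_accuracy:                 # step 40: stop check
            break
        # step 40: the misclassified blocks would be re-labeled here and
        # fed back through steps 10 and 20 on the next iteration
    return model, accuracy
```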
 Preferably, the noise types include: colorful noise, medium color noise, low color noise, high grayscale noise, medium grayscale noise, low grayscale noise, and no noise.
Preferably, in step 20, normalizing the classified sample images mainly means computing the average color value over all sample images and then subtracting this average from the color values of each sample image, yielding the normalized sample images.
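This mean-subtraction normalization can be sketched as follows; the average color value is interpreted here as the per-pixel mean over the whole sample set, an assumption on our part since the text does not specify per-pixel versus per-channel averaging.

```python
import numpy as np

def normalize_samples(images):
    """Subtract the mean color value of all sample images from each image.
    Expects a (N, H, W, C) array of N sample images."""
    images = np.asarray(images, dtype=np.float64)
    mean_color = images.mean(axis=0)   # average over all sample images
    return images - mean_color
```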
In step 20, inputting the classified sample images into the convolutional neural network system to train the classification model mainly means randomly collecting sample image blocks from the classified sample images, obtaining sample image blocks with classification labels, and feeding them, together with their labels, into the convolutional neural network system for learning. Wrongly classified sample images are collected and re-labeled: when the noise type automatically assigned by the system is inconsistent with the manually assigned noise type, either the manual label or the system classification is in error. In that case the labels are corrected manually and the network structure is adjusted, and the sample images are then re-labeled for training and learning. The process of "training -> adjusting network structure -> retraining" is repeated until the classification is correct.
In this embodiment, the network structure sequence is input layer -> K sub-layers -> fully connected layer -> SoftMax layer, where K is greater than or equal to 1. Each sub-layer may include a convolution layer, an activation layer, a downsampling layer, and a normalization layer. The kernel size and output size of each convolution, activation, downsampling, and normalization layer can be adjusted freely; each layer takes an input and produces an output, and the output of one layer serves as the input to the next layer.
The input size of the input layer is Height x Width x Channel, where Width and Height are the width and height of the input image and Channel is its number of color channels. Because the present invention is implemented on GPU hardware, Width = Height; the channel count of the input image can only be 1 or 3.
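These input-layer constraints can be made explicit with a small helper; the function name and error messages are ours, added only for illustration.

```python
def validate_input_shape(height, width, channels):
    """Check the input-layer constraints: square input (Width == Height,
    due to the GPU implementation) and a channel count of 1 or 3."""
    if width != height:
        raise ValueError("input must be square: Width == Height")
    if channels not in (1, 3):
        raise ValueError("channel count must be 1 (grayscale) or 3 (RGB)")
    return (height, width, channels)
```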
 Convolutional layer:
 1) The size of the kernel must be an odd number and not greater than the width or height of the input of the layer;
2) Passing through the convolutional layer does not change the width and height of the intermediate representation; the number of channels may change or stay the same. In theory the number of output channels can be any positive integer; because the present invention is implemented on GPU hardware, it is a multiple of 16 here.
Activation layer:
1) The activation layer does not change the width, height, or number of channels of the intermediate representation;
 2) The activation function used by the activation layer includes but is not limited to the following function types:
f(x) = 1/(1 + e^(-x))
f(x) = a·tanh(b·x), where a and b are any non-zero real numbers
f(x) = min(a, max(0, x))
f(x) = log(1 + e^x)
f(x) = x^2
f(x) = x
3) The activation layer follows a convolutional layer or a fully connected layer.
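The listed activation functions can be written in NumPy as follows; the default values supplied for a and b are illustrative choices of ours, not values from the original.

```python
import numpy as np

def sigmoid(x):                      # f(x) = 1/(1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def scaled_tanh(x, a=1.0, b=1.0):    # f(x) = a*tanh(b*x), a, b nonzero
    return a * np.tanh(b * x)

def clipped_relu(x, a=6.0):          # f(x) = min(a, max(0, x))
    return np.minimum(a, np.maximum(0.0, x))

def softplus(x):                     # f(x) = log(1 + e^x)
    return np.log(1.0 + np.exp(x))

def square(x):                       # f(x) = x^2
    return x ** 2

def identity(x):                     # f(x) = x
    return x
```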
 Downsampling layer:
1) The downsampling layer does not change the number of channels of the intermediate representation;
2) The reduction ratio of the downsampling layer equals its kernel size: a downsampling layer with an m x n kernel reduces the intermediate representation to (1/m) x (1/n) of the previous layer. In theory m and n can be any natural numbers; because the present invention is implemented on GPU hardware, m = n. For example, 15x15x32 becomes 5x5x32 after 3x3 downsampling, and 3x3x32 after 5x5 downsampling; but 15x15x32 cannot be downsampled by 2x2, because 15 is not divisible by 2. This does not mean the input size must be a power of 2 (16, 32, 64, etc.); the input size only needs to be divisible by every downsampling layer it passes through.
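The divisibility rule and the resulting output shape can be expressed as a small helper (a sketch; the function name is ours):

```python
def downsample_shape(height, width, channels, kernel):
    """Output shape after a kernel x kernel downsampling layer.
    The input width and height must be divisible by the kernel size."""
    if height % kernel or width % kernel:
        raise ValueError(
            f"{height}x{width} is not divisible by {kernel}x{kernel}")
    return (height // kernel, width // kernel, channels)
```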
 Normalization layer:
 1) The normalization layer does not change any dimensions of the intermediate representation;
2) The normalization layer is optional. Adding a normalization layer usually improves accuracy but increases the amount of computation; whether to add one depends on the actual accuracy gain versus the speed lost after adding it.
 The general combination is: convolution -> activation -> downsampling -> normalization.
 The following situations are special:
1) When adding a normalization layer yields only a small improvement in accuracy but increases the amount of computation, omit the normalization layer, i.e., use the combination: convolution -> activation -> downsampling;
2) The normalization layer may be moved earlier with essentially the same effect, i.e., use the combination: convolution -> activation -> normalization -> downsampling.
3) The downsampling layer may be omitted: convolution -> activation, or convolution -> activation -> normalization. The essence of downsampling is to increase robustness, while also reducing the amount of computation in subsequent layers. A network usually contains several downsampling layers, but not every "convolution -> activation" pair is followed by downsampling.
 Fully connected layer:
 1) The intermediate representation after passing through the fully connected layer will become 1-dimensional instead of 3-dimensional;
2) The output size of the fully connected layer can be arbitrary;
3) After a fully connected layer, convolution, downsampling, and normalization can no longer be performed;
4) A fully connected layer may be followed by an activation layer or by another fully connected layer.
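The shape rules stated above for all layer types can be traced with a small helper. This is a sketch of the stated constraints only; the layer encoding (tuples like `('conv', out_channels)`) is ours, not from the original.

```python
def propagate_shape(input_shape, layers):
    """Trace the intermediate-representation shape through the layer rules.
    Layers: ('conv', out_channels), ('act',), ('pool', k), ('norm',), ('fc', n)."""
    h, w, c = input_shape
    flat = None                      # becomes an int after the first fc layer
    for layer in layers:
        kind = layer[0]
        if flat is not None and kind not in ('fc', 'act'):
            raise ValueError("only activation or fc may follow a fc layer")
        if kind == 'conv':
            c = layer[1]             # width/height unchanged, channels may change
        elif kind == 'pool':
            k = layer[1]
            if h % k or w % k:
                raise ValueError("input not divisible by pool kernel")
            h, w = h // k, w // k    # reduced to 1/k in each dimension
        elif kind == 'fc':
            flat = layer[1]          # representation becomes 1-dimensional
        # 'act' and 'norm' change no dimensions
    return flat if flat is not None else (h, w, c)
```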
 SoftMax layer:
The SoftMax layer follows the fully connected layer; its function is to turn the real values produced by the fully connected layer into probabilities in the interval [0, 1].
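The SoftMax operation can be sketched as follows; the max-subtraction is a standard numerical-stability trick, not something stated in the original.

```python
import numpy as np

def softmax(logits):
    """Turn the real values from the fully connected layer into
    probabilities in [0, 1] that sum to 1."""
    z = np.asarray(logits, dtype=np.float64)
    z = z - z.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()
```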
 The network structure finally used in the present invention is shown in Table 1.
 Table 1 Convolutional Neural Network Structure
In step 50, classifying the collected image blocks to be detected with the best classification model mainly means putting the image blocks into the convolutional neural network system, computing the probability of each noise type for each image block, and selecting the noise type with the highest probability as the noise type of the image block. Specifically, the target area of the image to be detected is randomly sampled, the samples are fed into the input layer of the neural network, and after the fully connected layer the final SoftMax layer yields the probability of each label, i.e., a value in the interval [0, 1]. In this embodiment the noise types are: colorful noise, medium color noise, low color noise, high grayscale noise, medium grayscale noise, low grayscale noise, and no noise, for a total of 7 noise labels, i.e., 7 values whose sum equals 1. The label probabilities obtained for the individual image blocks are then averaged, and the label with the highest average probability is selected as the noise-type label of the image to be detected. In step 30, where the system randomly collects sample image blocks from the target area of the sample images and performs noise classification, the method for judging the noise type is similar to the above.
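The final decision rule of step 50 (average the 7 per-label probabilities over all sampled patches, then take the highest) can be sketched as:

```python
import numpy as np

NOISE_LABELS = ["colorful noise", "medium color noise", "low color noise",
                "high grayscale noise", "medium grayscale noise",
                "low grayscale noise", "no noise"]

def classify_image(patch_probabilities):
    """Average the per-patch label probabilities and return the label
    with the highest mean probability (step 50's decision rule)."""
    probs = np.asarray(patch_probabilities)   # shape: (num_patches, 7)
    mean = probs.mean(axis=0)                 # average over sampled patches
    return NOISE_LABELS[int(mean.argmax())]
```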
While the foregoing specification illustrates and describes preferred embodiments of the present invention, it is to be understood that the present invention is not limited to the form disclosed herein and should not be construed as excluding other embodiments; rather, it may be used in various other combinations, modifications, and environments, and may be changed within the scope of the inventive concept described herein, through the above teachings or through skill or knowledge in the relevant field. Modifications and changes made by those skilled in the art that do not depart from the spirit and scope of the present invention should all fall within the protection scope of the appended claims.