Intelligent detection method based on graph neural network

A neural network and intelligent detection technology in the field of graph-neural-network-based intelligent detection. It addresses the problems that training inspectors incurs high time and economic costs and that manual inspection easily produces misjudgments and missed judgments; its effects are reducing labor and detection costs, improving accuracy and efficiency, and improving the expressive ability of the model.

Pending Publication Date: 2019-11-05
TONGJI UNIV
Cites: 4 | Cited by: 15

AI-Extracted Technical Summary

Problems solved by technology

[0005] (1) Manually judging debugging results takes a long time and easily leads to misjudgment and missed judgment.
[0006] (2) Training qualified inspectors requires considerable time and economic cost.
[0007] (3) There is no process of recording and s...


Abstract

The invention provides an intelligent detection method based on a graph neural network. The method comprises the following steps: collecting data, preprocessing the data, building a network model, carrying out pre-training and transfer learning, predicting, and performing sampling-inspection verification to perfect the whole prediction system. Compared with manual detection, the method improves the accuracy and efficiency of component detection, reduces the interference of human factors, and lowers labor and detection costs. Compared with traditional machine learning methods, the graph neural network does not require the data to have a good spatial relationship, that is, a neatly arranged matrix form; its ability to accept unstructured input significantly improves the expressive ability of the model. Compared with convolutional neural network methods, the graph neural network better learns the logical relationships among the elements, which improves the generalization ability of the model. During learning, each node propagates its own information and integrates the information of its neighbor nodes, so that the logical paradigm of the data is learned and mastered.


Examples

  • Experimental program(1)

Example Embodiment

[0050] Example 1:
[0051] As shown in Figure 1 and Figure 2, the intelligent detection technology based on a graph neural network in this embodiment specifically includes the following steps:
[0052] 1. Sample Collection
[0053] Sample collection is a critical step in the entire automated inspection process for components such as elevator traction motors. Accurate sample data is required for model training, transfer, and prediction.
[0054] The inspection indicators of the elevator traction motor components comprise nine items in total: temperature, humidity, weight, volume, vibration, life, imaging, rated current and voltage, and maximum torque. Specifically:
[0055] Temperature, as a measure of the component's operating state, is detected with a thermometer, which outputs the parameter t with the component in the standby state and in the loaded state respectively.
[0056] Humidity, as a measure of the component's internal environment, uses the output parameter m of a humidity sensor.
[0057] Weight, as a measure of whether the parts are completely installed, uses the output parameter w of an electronic weighing instrument.
[0058] For volume, the overall dimensions are measured first, then each region; the volume of the entire product is calculated by partitioning it into regions and applying geometric equations, and the final calculated volume parameter v is output.
[0059] For vibration, a vibration meter outputs the number of vibrations per unit time, a frequency curve is fitted, and the vibration frequency f is output.
[0060] For life, the parameter value a is obtained statistically under continuous full-load, intermittent full-load, and normal working conditions.
[0061] The image i, as an index of the surface quality of the elevator traction motor components, includes pictures of each plane of the components taken under multiple light sources, because surface unevenness is difficult to detect under a single light source.
[0062] The rated current and voltage, as measures of the load capacity of the components under different power supplies, are tested at 80%, 90%, 100%, 110% and 120% of the standard state, outputting the rated current ir_j and rated voltage vr_j at each level j.
[0063] The maximum torque torque_max serves as an index of how many passengers the elevator traction motor components can bear.
[0064] Sample collection also includes the sample label, i.e., the inspection result corresponding to a set of inspection data: qualified (Y) / unqualified (N).
[0065] The detection indicators collected above constitute the original data.
[0066] 2. Data preprocessing
[0067] The elevator traction motor inspection data collected in step 1 may contain missing values, and re-inspection is costly, so data cleaning is required. When only some detection indicators are missing, the mean and variance of each indicator are computed, a normal distribution is fitted to them, and samples are drawn from that distribution to fill the missing indicator values. When the detection indicators are severely missing, the batch of data is discarded.
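The cleaning rule above can be sketched as follows. The text does not define "severely missing", so treating "more than half of the indicators missing" as severe is an assumption, and the function name is illustrative:

```python
import numpy as np

def impute_missing(X, seed=0):
    """Clean an (samples x indicators) matrix: drop rows with severe loss,
    then fill remaining NaNs by sampling from a normal distribution fitted
    to each indicator's observed mean and standard deviation."""
    X = np.asarray(X, dtype=float)
    # Assumed threshold: more than half the indicators missing counts as severe.
    keep = np.isnan(X).sum(axis=1) <= X.shape[1] // 2
    X = X[keep].copy()
    rng = np.random.default_rng(seed)
    for j in range(X.shape[1]):
        col = X[:, j]
        missing = np.isnan(col)
        if missing.any():
            mu, sigma = np.nanmean(col), np.nanstd(col)
            col[missing] = rng.normal(mu, sigma, missing.sum())
    return X
```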
[0068] To make the input data format conform to the input of the graph convolutional network, the cleaned data needs to be described by a Laplacian matrix. The Laplacian matrix used in the present invention is the degree diagonal matrix of all nodes minus the adjacency matrix (L = D - A), which is easy to implement and has good interpretability: the value at each node in the matrix represents the weight of that node's influence on the entire network.
[0069] As shown in Figure 3, each node in the graph is a detection indicator (temperature, weight, humidity, volume, vibration, life, imaging, etc.). The graph model in the figure has been simplified, but redundant connections between nodes can be preserved during training. The calculation process of the Laplacian matrix of the data is shown in Table 1.
[0070] Table 1: Calculation process of the Laplacian matrix of the above data
[0071]
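The Laplacian construction described above (degree diagonal matrix minus adjacency matrix) is a standard operation; a minimal sketch:

```python
import numpy as np

def laplacian(A):
    """Unnormalized graph Laplacian L = D - A, where D is the diagonal
    degree matrix of the adjacency matrix A."""
    A = np.asarray(A, dtype=float)
    D = np.diag(A.sum(axis=1))  # each node's degree = its influence weight
    return D - A
```

For an undirected graph, each row of L sums to zero and L is symmetric, which is part of its interpretability.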
[0072] 3. Model training
[0073] The graph neural network of the present invention borrows from DeepWalk: a weighted sum is taken over the node sequence formed by a node and its neighbor nodes, thereby extracting local information.
[0074] As shown in Figure 4, hollow circles represent detection indicators that have not yet been learned, thin solid lines represent unlearned indicator relationships, solid circles represent learned indicators, and thick solid lines represent learned relationships. As the network deepens, the number of learned indicators and relationships grows, until information extraction over the entire graph and learning of the logical paradigm are complete. The feature layer is then converted into several fully connected layers, and a SoftMax layer outputs the probability that the component is qualified.
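The layer-by-layer propagation and SoftMax head described above might look like the following minimal sketch. Using the unnormalized Laplacian from step 2 directly as the propagation operator, the ReLU activations, and all layer sizes are assumptions for illustration:

```python
import numpy as np

def propagate(L_hat, H, W):
    # One graph layer: mix each node's features with its neighbours'
    # via the graph operator, then apply a linear map and ReLU.
    return np.maximum(L_hat @ H @ W, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(L_hat, X, graph_weights, W_fc):
    """Stacked graph layers (deeper layers see wider neighbourhoods),
    then a flattened fully connected head with a SoftMax output."""
    H = X
    for W in graph_weights:
        H = propagate(L_hat, H, W)
    logits = W_fc @ H.reshape(-1)
    return softmax(logits)  # e.g. [P(unqualified), P(qualified)]
```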
[0075] To strengthen the influence of global information on the result, an attention mechanism is used to fuse local and global information. Global information can be incorporated in three ways: element-wise matrix addition, element-wise matrix multiplication, and matrix concatenation. Because part-inspection data is sparse, concatenation fuses global information most effectively. Specifically, a 1x1 convolution first reduces the number of channels of the feature map, which is then converted into a fully connected layer and concatenated onto the subsequent fully connected layer. In this way global information is fused effectively, preventing both the information loss caused by compressing the data too quickly and the loss of global information during backpropagation.
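A toy illustration of the concatenation-based fusion: the 1x1 convolution is emulated by a single channel-mixing matrix over a flattened feature map, and all shapes and names are illustrative rather than taken from the patent:

```python
import numpy as np

def fuse_by_concat(feature_map, local_fc, seed=0):
    """Reduce the global feature map's channels (1x1-conv style), flatten it,
    and splice it onto the local fully connected features, instead of
    element-wise addition or multiplication."""
    rng = np.random.default_rng(seed)
    # feature_map: (channels, nodes); w_reduce plays the role of a 1x1 conv
    # that maps all channels down to a single channel.
    w_reduce = rng.standard_normal((1, feature_map.shape[0]))
    global_vec = (w_reduce @ feature_map).reshape(-1)
    return np.concatenate([local_fc, global_vec])
```

Concatenation preserves both sources of information intact, whereas addition or multiplication forces them into one shared slot, which is why it suits sparse data.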
[0076] Model training has two stages: pre-training and transfer learning. In the pre-training stage the model is trained on generated data and the parameters of all network layers are updated; in the transfer-learning stage the model is updated on real data with the first few layers fixed, so only the parameters of the later layers are updated. In a deep network the first few layers are mainly responsible for extracting features while the later layers handle the specific classification or regression task, and the generated and real data have highly consistent feature distributions, so fixing the parameters of the first few layers during transfer learning preserves the accuracy and efficiency of feature extraction. This completes forward propagation. Backpropagation computes the cross-entropy loss between the predicted probability and the true label and updates the network parameters according to the gradient of that loss.
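The two-stage update rule reduces to masking which layers receive gradient steps; a minimal sketch (the SGD form and layer granularity are assumptions):

```python
import numpy as np

def sgd_step(params, grads, lr, trainable):
    """One parameter update. Pre-training passes trainable=[True]*n;
    transfer learning freezes the early feature-extraction layers by
    marking them non-trainable, so only the later layers move."""
    return [p - lr * g if t else p
            for p, g, t in zip(params, grads, trainable)]
```

In the pre-training stage `trainable` is all `True`; in the transfer stage the entries for the first few layers are set to `False`.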
[0077] The essence of an end-to-end model is that a single model realizes all functions, such as feature extraction and attention connection, with its parameters updated jointly. Since the graph neural model built by the present invention is end-to-end, the modules are connected and the logical paradigm of the data, i.e., the dependency relationships among the detection indicators, is derived automatically. After training, the network learns that among the detection indicators of the elevator traction motor, vibration, temperature and life carry relatively large weights. Combined with the test results, it is found that when the vibration frequency is significantly greater than 10 kHz, the temperature also exceeds the normal range of 60-70°C and the service life is about 70% of the normal life; in such cases the result leans toward judging the component unqualified.
[0078] The conditional generative adversarial network based on soft labels is similar to the original generative adversarial network in requiring a generator and a discriminator. The generator produces data that is as realistic as possible, so that the discriminator judges the generated data to be real; the discriminator distinguishes real data from generated data as correctly as possible. The input to the soft-label conditional network is a probability value indicating the probability that the component corresponding to the generated detection data is qualified; the network can therefore generate labeled and diverse data.
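The soft-label conditioning amounts to feeding the generator a probability alongside its noise vector; a small sketch of that input construction (noise dimension and function name are illustrative):

```python
import numpy as np

def generator_input(p_qualified, noise_dim=8, rng=None):
    """Build the generator's input: the soft label, a probability in [0, 1]
    that the generated detection data corresponds to a qualified component,
    concatenated with a random noise vector for diversity."""
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal(noise_dim)
    return np.concatenate([[p_qualified], z])
```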
[0079] 4. Test result prediction
[0080] After obtaining the test data by the sampling method of step 1 and the preprocessing method of step 2, the data is input into the graph neural network for prediction. Taking binary classification detection as an example, the final output of the model is a probability value: the closer the value is to 1, the more likely the component is qualified; conversely, the closer it is to 0, the more likely the component is unqualified. From another perspective, this value can be regarded as the model's confidence in its prediction; the closer to 1 or 0, the higher the confidence. Therefore, to further improve detection accuracy, the prediction is adopted only when the probability value P >= 95% or P <= 5%.
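The confidence thresholding can be sketched as below. The original sentence is truncated after "P", so the symmetric lower threshold (P <= 5%) and the deferral to manual inspection are assumptions mirroring the stated upper threshold and the spot-check step that follows:

```python
def decide(p, hi=0.95, lo=0.05):
    """Adopt the model's prediction only when it is confident;
    otherwise defer the component to manual inspection (assumed fallback)."""
    if p >= hi:
        return "qualified"
    if p <= lo:
        return "unqualified"
    return "manual inspection"
```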
[0081] 5. Sampling inspection
[0082] To ensure the accuracy of the model, manual spot checks are performed on the automatically predicted results. Specifically, for each batch of data, to keep the samples balanced, 10% of the samples predicted positive and 10% of those predicted negative are selected for manual inspection. If a manual result differs from the model's prediction, the misclassified samples are used to transfer the model and improve its accuracy; that is, only the parameters of the last two graph neural network layers and the subsequent fully connected layers are updated.
[0083] When a spot-check result differs from the prediction, there are two cases: a qualified product predicted as unqualified, and an unqualified product predicted as qualified. In the present invention, predicting an unqualified product as qualified is treated as the more serious error, so samples making this error drive a larger model update. When updating the model with both types of error samples, only the parameters of the last two graph neural network layers and the subsequent fully connected layers are updated, because the number of misclassified samples is small at this point and updating the entire network would lead to overfitting.
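The balanced spot-check selection and the asymmetric error weighting above can be sketched as follows; the 2x weight for the more serious error and the function names are assumptions (the patent says only that this error "drives a greater update"):

```python
import numpy as np

def spot_check(preds, rate=0.10, rng=None):
    """Pick `rate` of the predicted-positive and `rate` of the
    predicted-negative samples for manual re-inspection, keeping the
    check balanced across both predicted classes."""
    rng = rng or np.random.default_rng(0)
    idx = np.arange(len(preds))
    pos, neg = idx[preds == 1], idx[preds == 0]
    return np.concatenate([
        rng.choice(pos, max(1, int(rate * len(pos))), replace=False),
        rng.choice(neg, max(1, int(rate * len(neg))), replace=False),
    ])

def error_weight(pred, truth, w_severe=2.0):
    """Loss weight for the transfer update: predicting an unqualified part
    (truth=0) as qualified (pred=1) is the more serious error and gets the
    larger (assumed) weight."""
    if pred == truth:
        return 0.0
    return w_severe if (pred == 1 and truth == 0) else 1.0
```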

