AI (Artificial Intelligence) based low-confidence sample processing method and system of board sorting

A low-confidence sample processing technology, applied in the fields of instruments, character and pattern recognition, computer parts, etc. It addresses the difficulty of judging and classifying low-confidence samples when the supply of wooden boards is limited, with the effect of improving training efficiency, sorting accuracy, and classification accuracy.

Active Publication Date: 2018-03-23
Cites: 7 | Cited by: 10

AI-Extracted Technical Summary

Problems solved by technology

However, in the board sorting scenario described above, each factory's classification scheme is customized, and its supply of boards is also limited.
Therefore, the acquisition of training data is not easily satisfied.
When the machine learning method is running, a sample with low confidence...

Method used

As shown in Figure 3, the image data of the low-confidence sample is presented together with the image data of high-confidence samples; by comparing the high-confidence and low-confidence sample data side by side, the operator can more easily compare and re-label the low-confidence sample image.
[0073] The present invention is directed to a machine learning method for wood classification. The machine learning method can automatically sort boards into custom categories, a great improvement over the speed of manual...


The invention provides an AI-based low-confidence sample processing method and system for board sorting. Image data of a low-confidence sample in at least one format is obtained; an image of the low-confidence sample in at least one format is presented on a display device; a new class label for the low-confidence sample is obtained; and the labeled low-confidence sample is fed into a training method, producing a new classification model via retraining. With the method and system, low-confidence samples can be continuously discovered and utilized, so that the classification precision of the machine learning method improves gradually.

Application Domain

Technology Topic

Machine learning · Low Confidence +6


  • AI (Artificial Intelligence) based low-confidence sample processing method and system of board sorting


  • Experimental program (5)

Example Embodiment

[0074] Example 1
[0075] As shown in Figure 1, a wooden board is fed into the image collection area on a conveyor belt, and image acquisition is completed while the board is in motion. The imaging device captures an image of the board and feeds the captured image into a trained machine learning model.
[0076] The machine learning method first requires obtaining a set of board samples inside the factory, together with a classification for each sample. Because wood is a semi-natural product, it cannot have a clear classification standard the way steel and other industrial products do. Factories therefore currently classify boards in a custom way according to their own circumstances. This custom classification is better suited to the actual situation and classification requirements of different board factories, and is more flexible and convenient. The classification itself is carried out by humans relying on experience: how many categories to set up, and which samples fall into which category, are all decided manually. Manual classification can be based on any board characteristics, such as color, texture, or defects. This classification is thus a custom wood classification driven by each factory's needs, rather than a general classification that can be defined in advance.
[0077] The specific procedure is to first assemble the board samples and then classify them in a custom way. For example, board samples 1-3 are assigned to category A, boards 4-8 to category B, and boards 9-10 to category C.
[0078] Since the classification is custom, it can be tailored to the specific conditions and actual classification requirements of the board factory. For example, boards 1, 3, and 5 can be assigned to category A and the remaining samples to category B.
[0079] Note that, as technology develops, sample classification is not limited to manual experience; clustering methods from machine learning can also be used to classify board samples automatically.
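As a minimal sketch of such automatic grouping, assuming the mean color of each board image is an adequate feature (the patent does not specify the algorithm or features, so a basic k-means and this crude feature are purely illustrative):

```python
import numpy as np

def kmeans_labels(features, k, iters=20, seed=0):
    """Assign each feature vector to one of k clusters (basic k-means)."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)].copy()
    for _ in range(iters):
        # Distance from every sample to every cluster center.
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

def auto_classify(board_images, n_categories=3):
    """Map each board image to a custom category name ('A', 'B', ...)."""
    # Crude feature: the average RGB color of each board image.
    feats = np.array([img.reshape(-1, 3).mean(axis=0) for img in board_images])
    labels = kmeans_labels(feats, n_categories)
    return [chr(ord("A") + int(l)) for l in labels]
```

Calling `auto_classify` on a list of H×W×3 image arrays returns one custom category name per board, which could then be reviewed or renamed to match the factory's scheme.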
[0080] Then, image acquisition is performed on the board samples. The imaging device captures board sample images under certain natural and/or artificial lighting conditions. The following is a set of data using the above classification:
[0081] Sample 1: Category A
[0082] Sample 2: Category B
[0083] Sample 3: Category A
...
[0085] Sample N: Category C
[0086] Next, the image data is used as input, and the machine learning model produces a confidence estimate for each category. During classification, the trained model's confidence in each category is evaluated. The confidence level reflects how strongly the model believes the board sample should be assigned to a given category, for example:
[0087] Sample 1:
[0088] Category: {A: 95%, B: 3%, C: 2%}
[0089] Sample 2:
[0090] Category: {A: 49%, B: 50%, C: 1%}
[0091] Here, the confidence of Sample 1 in category A is much higher than its confidence in the other categories, which means the board should be assigned to category A with very high probability. We call such a sample, with a clearly higher confidence for one category, a high-confidence sample; by applying a confidence-based decision rule, high-confidence samples can be routed to their corresponding classes by mechanical devices.
[0092] However, because a wood board is a semi-natural product rather than a standardized industrial product, a board often has a relatively unique pattern or color, which can produce a result like Sample 2. Its specific feature is that the confidence for category A is similar to that for category B (49% vs. 50%), and no single category's confidence is much higher than the others. In other words, the trained model cannot confidently predict whether the sample belongs to category A or B. We call such samples, with similar confidence levels across multiple categories, low-confidence samples.
[0093] The appearance of low-confidence samples indicates that the trained machine learning model cannot cope with the specific image characteristics of the sample. One reason is that the data used to train the model does not sufficiently cover the sample's particularities. Since machine learning models improve their classification performance as training data grows, low-confidence samples are very valuable data resources. Moreover, because all classifications are customized by a given factory, low-confidence sample data cannot be obtained from other factories. This means that the low-confidence samples generated by a factory's internal classification are extremely valuable for that factory's own classification method. The following sections describe in detail how to use these low-confidence samples to iteratively improve the performance of the machine learning model.
[0094] First, a discriminant condition for identifying low-confidence samples is set; by analyzing the confidence values, this condition determines whether the current sample is a low-confidence sample. For example, a threshold can be set: when no category's confidence exceeds the threshold, the sample is considered low-confidence. Alternatively, a margin can be set: when the difference between the confidences of multiple categories is less than the margin, the sample is considered low-confidence.
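The two discriminant rules above can be sketched as follows, assuming confidences arrive as a dict of category name to probability; the 0.8 threshold and 0.1 margin are illustrative values, not taken from the patent:

```python
def is_low_confidence(confidences, threshold=0.8, margin=0.1):
    """Flag a sample for manual re-labeling using either rule from the text:
    (1) no category's confidence exceeds `threshold`, or
    (2) the gap between the two best categories is below `margin`."""
    scores = sorted(confidences.values(), reverse=True)
    below_threshold = scores[0] < threshold
    narrow_margin = len(scores) > 1 and (scores[0] - scores[1]) < margin
    return below_threshold or narrow_margin

print(is_low_confidence({"A": 0.95, "B": 0.03, "C": 0.02}))  # Sample 1: False
print(is_low_confidence({"A": 0.49, "B": 0.50, "C": 0.01}))  # Sample 2: True
```

On the two example samples, Sample 1 passes as high-confidence while Sample 2 is flagged by both rules.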
[0095] As shown in Figure 2, when a low-confidence sample is found, the system displays the sample's data on a display device so the sample can be re-classified and labeled. The labeled low-confidence samples are then fed into the trained machine learning model for a new round of training, further improving the model's classification performance. Receiving the classification label can be achieved by obtaining a manual classification.
[0096] After multiple low-confidence samples have been obtained, there are several ways to train the machine learning model. One is to combine the newly labeled samples with the original samples and train a brand-new model. Another is to collect a certain number of labeled low-confidence samples; since machine learning supports iterative evolution through batch training, these labeled samples can be used as a new training batch to retrain the original model.
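The batch-retraining strategy above can be sketched as a small buffer that releases re-labeled samples once a full batch has accumulated; the class name and default batch size are assumptions for illustration:

```python
class RelabelBuffer:
    """Collects manually re-labeled low-confidence samples and releases them
    as a training batch once enough have accumulated (hypothetical sketch;
    the batch size of 32 is an assumption, not from the patent)."""

    def __init__(self, batch_size=32):
        self.batch_size = batch_size
        self._buffer = []

    def add(self, image, label):
        """Store one re-labeled low-confidence sample."""
        self._buffer.append((image, label))

    def pop_batch(self):
        """Return one full batch for retraining, or None if not yet full."""
        if len(self._buffer) < self.batch_size:
            return None
        batch = self._buffer[:self.batch_size]
        self._buffer = self._buffer[self.batch_size:]
        return batch
```

Each released batch would then be fed to the model's incremental training step, e.g. a `partial_fit`-style call in scikit-learn-like APIs.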

Example Embodiment

[0097] Example 2
[0098] During manual labeling, the low-confidence samples themselves carry a certain ambiguity, so even manual classification faces challenges. How these samples are presented to the operator therefore determines the labeling accuracy of low-confidence samples. The following implementations describe specific presentation methods.
[0099] As shown in Figure 3, not only the image data of the low-confidence sample is presented, but the image data of high-confidence samples is presented at the same time. By comparing high-confidence and low-confidence sample data side by side, the operator can more easily compare and re-label the low-confidence sample image.
[0100] To make the comparison clearer, low-confidence and high-confidence samples, or previously hand-labeled samples, can be displayed in the same interface. In this interface, one low-confidence sample is shown alongside multiple high-confidence samples from multiple categories, so the operator can easily select the best category for the low-confidence sample by comparison.
[0101] For further clarity, the low-confidence sample can be presented together with the confidence values it obtained for each category. The confidence values help the operator understand why the sample has low confidence, for example that categories A and B cannot be accurately distinguished.

Example Embodiment

[0102] Example 3
[0103] Low-confidence samples may also be caused by changes in external ambient light, such as insufficient light intensity or stray light contaminating the captured image. One method therefore preprocesses the sample, for example by enhancing the original image based on a reference image, normalizing parameters such as brightness, white balance, and contrast.
[0104] To later eliminate the influence of illumination changes on image quality, a reference image can be set up during image acquisition. For example, a white reference object is placed in the image collection area so that the board sample and the white reference are captured in the same image. The white reference provides a baseline for white balance, brightness, or other image parameters. In one method, an external light source, such as an LED, provides uniform illumination during capture to raise the base brightness of the image. As shown in Figure 4, a white reference is placed on the conveyor belt and the image acquisition device (camera) is aimed at the area where it sits. When a board sample appears, the acquisition device records the white reference and the board image simultaneously, yielding board image data with an embedded reference.
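As a sketch of this reference-based normalization, assuming the white patch's pixel region has already been located in the frame, each color channel can be rescaled so the reference reaches a fixed target brightness; the target value of 240 is an illustrative assumption:

```python
import numpy as np

def normalize_with_reference(image, ref_patch, target=240.0):
    """Scale each color channel so the white reference reaches `target`.

    image: HxWx3 float array of the board photo.
    ref_patch: pixel region of the white reference captured in the same frame.
    The per-channel gain corrects brightness and white-balance drift caused
    by changing ambient light.
    """
    ref_mean = ref_patch.reshape(-1, 3).mean(axis=0)  # average RGB of the reference
    gain = target / np.maximum(ref_mean, 1e-6)        # per-channel correction factor
    return np.clip(image * gain, 0, 255)
```

A photo taken under dim light (reference reading, say, 120 instead of 240) would be brightened by a factor of two per channel, so boards shot under different lighting become directly comparable.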
[0105] In addition, to make the method more adaptive, other variables from the image acquisition process can also be included in the training samples, such as the lighting conditions and the conveyor belt speed at capture time. This yields enhanced classification data:
[0106] Sample 1
[0107] [Category: A, speed: V2, light intensity: L3, camera angle: A5]
[0108] Sample 2
[0109] [Category: A, speed: V3, light intensity: L3, camera angle: A5]
[0110] Sample 3
[0111] [Category: B, speed: V0, light intensity: L3, camera angle: A5]
[0112] Sample 4
[0113] [Category: A, speed: V2, light intensity: L3, camera angle: A5]
[0114] Note that the training data is not limited to the above examples; other relevant parameters can be selectively integrated into the training data.
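The enhanced records above could be represented as plain data objects; the field names below are assumptions chosen for illustration, not names used in the patent:

```python
from dataclasses import dataclass

@dataclass
class TrainingSample:
    category: str         # factory-defined class, e.g. "A"
    belt_speed: str       # conveyor speed code at capture time, e.g. "V2"
    light_intensity: str  # lighting code, e.g. "L3"
    camera_angle: str     # camera angle code, e.g. "A5"

# The four enhanced samples listed in the text:
samples = [
    TrainingSample("A", "V2", "L3", "A5"),
    TrainingSample("A", "V3", "L3", "A5"),
    TrainingSample("B", "V0", "L3", "A5"),
    TrainingSample("A", "V2", "L3", "A5"),
]
```

Extra acquisition parameters can be appended as further fields without disturbing the existing records.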
[0115] When the training data contains a variety of optional parameters, the trained model can classify not only the board type but also the corresponding parameters, enabling more accurate judgments. For example, it can estimate the board's moving speed and the lighting conditions at capture time, shielding the classification from changes in the external environment.
[0116] Regarding presentation, the presented samples are normalized to remove the influence of external light, making them easier to compare with high-confidence samples. Figure 5 gives an example of such a presentation, in which the original image of the low-confidence sample and its enhanced image are shown together with a high-confidence sample; the operator can then select the best category based on the enhanced image. The original and enhanced images of the high-confidence sample can also be shown on the display device at the same time; no figure is given for this case.