Visual detection-oriented target detection model training method and target detection method

A target detection and visual inspection technology, applied in the fields of semi-supervised learning and target detection, which addresses the problem that data labeling for fully supervised training consumes a large amount of manpower and material resources, and achieves excellent fault tolerance and good real-time monitoring performance.

Pending Publication Date: 2022-08-09
北京师范大学珠海校区

AI Technical Summary

Problems solved by technology

[0004] Aiming at the problem that using a fully supervised target detection model for mobile phone appearance defect detection requires a large amount of manpower and material resources for data labeling, the present invention proposes a visual detection-oriented target detection model training method and a target detection method. The invention is based on the semi-supervised target detection training module Fix-YOLOX (FixMatch + You Only Look Once X): it uses a small amount of labeled data for fully supervised training and adds a semi-supervised training module that combines pseudo-labeling and consistency regularization. Using the unlabeled data avoids overfitting to the fully supervised training set, and improves the generalization of the model and its tolerance to faults in the labeled data.

Method used




Embodiment Construction

[0043] The present invention will be further described in detail below with reference to the accompanying drawings. The examples are only used to explain the present invention, but not to limit the scope of the present invention.

[0044] Figure 1 is a schematic diagram of the steps of the pseudo-label training method:

[0045] (1) Train the selected target detection model on the labeled data to obtain initial weights.

[0046] (2) Using the initial weights, run the model on the unlabeled data to obtain predictions.

[0047] (3) Keep the high-confidence predictions as pseudo-labels for the unlabeled data.

[0048] (4) Input the labeled data and the pseudo-labeled unlabeled data together into the model for retraining.
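The four steps above can be sketched as a confidence-threshold filter over the model's raw detections. The function name, the prediction format, and the threshold value below are illustrative assumptions, not the patent's exact procedure; a real pipeline would wrap a detector such as YOLOX:

```python
# Assumed threshold for accepting a prediction as a pseudo-label;
# the patent does not fix a specific value.
CONF_THRESHOLD = 0.9

def select_pseudo_labels(predictions, threshold=CONF_THRESHOLD):
    """Keep only high-confidence detections as pseudo-labels.

    predictions: list of (class_name, confidence, bbox) tuples for one
    unlabeled image, as a hypothetical detector might emit them.
    """
    return [p for p in predictions if p[1] >= threshold]

# Example: three raw detections on one unlabeled image
raw = [("scratch", 0.97, (10, 10, 40, 40)),
       ("dent",    0.55, (50, 60, 80, 90)),
       ("scratch", 0.92, (5, 70, 30, 95))]

# Only the two detections at 0.97 and 0.92 survive the filter
pseudo = select_pseudo_labels(raw)
```

The pseudo-labeled images would then be merged with the labeled set for the retraining pass of step (4).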

[0049] Figure 2 is a schematic diagram of consistency regularization: under the consistency-regularization constraint, two differently transformed versions of the letter A, when input to the model, should both be predicted as A, and the two results should be consistent.
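As a minimal sketch, the consistency constraint can be expressed as a penalty on the disagreement between the model's predictions for two views of the same input. The mean-squared form below is one common choice, assumed here for illustration rather than taken from the patent:

```python
def consistency_loss(pred_a, pred_b):
    """Mean squared difference between two prediction score vectors."""
    return sum((a - b) ** 2 for a, b in zip(pred_a, pred_b)) / len(pred_a)

# Hypothetical class-score vectors for two augmented views of the same
# "letter A" input; they nearly agree, so the penalty is small.
view1 = [0.90, 0.05, 0.05]
view2 = [0.88, 0.07, 0.05]
loss = consistency_loss(view1, view2)
```

Minimizing this penalty pushes the model toward identical predictions for both views, which is exactly the behavior Figure 2 illustrates.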

[0050...



Abstract

The invention discloses a target detection model training method for visual detection and a target detection method. The training method comprises the following steps:

1) selecting a number of labeled and unlabeled samples for each training iteration;
2) inputting the labeled image samples into the target detection model for training, and obtaining a prediction for each labeled sample;
3) computing the loss between each prediction and its corresponding label to obtain the supervised loss Ls;
4) applying both a weak augmentation and a strong augmentation to each unlabeled image sample;
5) inputting the weakly augmented samples into the target detection model for prediction, and using the resulting predictions as pseudo-labels for the corresponding strongly augmented samples;
6) inputting the strongly augmented samples into the target detection model for prediction, and computing the loss between the resulting predictions and the corresponding pseudo-labels to obtain the unsupervised loss Lu;
7) adjusting the parameters of the target detection model according to Ls and Lu.
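The seven steps can be sketched as a single training iteration. The augmentations, the stand-in "detector", and the weighting factor `lambda_u` below are all illustrative assumptions; the patent's Fix-YOLOX would use YOLOX with image-level augmentations and detection losses:

```python
def weak_augment(x):
    """Stand-in weak view (a real pipeline might flip/shift the image)."""
    return [v + 0.01 for v in x]

def strong_augment(x):
    """Stand-in strong view (a real pipeline might use color jitter/cutout)."""
    return [v * 0.9 for v in x]

def model_predict(x):
    """Stand-in 'detector': identity mapping, for illustration only."""
    return x

def mse(pred, target):
    return sum((a - b) ** 2 for a, b in zip(pred, target)) / len(pred)

def train_step(labeled, unlabeled, lambda_u=1.0):
    # Steps 2)-3): supervised loss Ls on the labeled samples
    Ls = sum(mse(model_predict(x), y) for x, y in labeled) / len(labeled)
    # Steps 4)-6): pseudo-labels from weak views supervise the strong views
    Lu = 0.0
    for x in unlabeled:
        pseudo = model_predict(weak_augment(x))    # step 5: pseudo-label
        pred = model_predict(strong_augment(x))    # step 6: strong-view prediction
        Lu += mse(pred, pseudo)
    Lu /= len(unlabeled)
    # Step 7: the combined objective used to update the model parameters
    return Ls + lambda_u * Lu

# One labeled pair (prediction already matches the label, so Ls = 0)
# and one unlabeled sample contributing a small unsupervised loss.
loss = train_step([([0.5, 0.5], [0.5, 0.5])], [[1.0, 0.0]])
```

Only the weak view generates the pseudo-label; the gradient flows through the strong view's prediction, which is the FixMatch-style asymmetry the abstract describes.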

Description

technical field

[0001] The invention relates to the fields of semi-supervised learning and target detection, and in particular to a target detection model training method and a target detection method oriented toward visual detection.

Background technique

[0002] At present, most target detection models are fully supervised, such as the models of the "You Only Look Once" (YOLO) series. The YOLO series comprises single-stage target detection models that favor speed: they unify target detection into a regression problem and greatly reduce the number of parameters required for prediction, so they achieve high-speed detection, but the reduced parameter count lowers detection accuracy. The latest model in the series, YOLOX ("You Only Look Once X"), combines fast detection speed with high precision, but it still requires a large amount of manually labeled data for training...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06V10/774; G06V10/764; G06V10/766; G06V10/82; G06N3/04; G06N3/08
CPC: G06V10/7753; G06V10/764; G06V10/766; G06V10/82; G06N3/084; G06N3/088; G06V2201/07; G06N3/045
Inventor 杨戈, 周祺峰
Owner 北京师范大学珠海校区