An automatic radiotherapy target segmentation method based on self-supervised learning
A self-supervised learning and automatic segmentation technology, applied in the field of image processing. It addresses the problems of high labor and delineation costs, increased time and equipment costs, and the scarcity and expense of labeled medical image data, achieving fast convergence and saving labor and time.
Examples
Embodiment 1
[0049] As shown in Figure 1, this embodiment provides an automatic radiotherapy target segmentation method based on self-supervised learning, comprising the following steps:
[0050] Step 1, data preparation: collect the original CT data and divide it into an unlabeled data set and a labeled data set;
[0051] Step 2, feature extraction: according to the characteristics of the CT data, build a pre-training network based on self-supervised learning, input the unlabeled data set into the pre-training network for iterative training, and select the optimal pre-training model;
[0052] Step 3, segmentation model generation: build a segmentation network according to the segmentation task, load the trained self-supervised pre-training model into the segmentation network, then input the labeled data set into the segmentation network for training and select the optimal model, and finally test and evaluate the segmentation performance of the model.
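The data-preparation split in step 1 can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name, the 20% labeled fraction, and the scan identifiers are all assumptions chosen for the example.

```python
def split_dataset(ct_scans, labeled_fraction=0.2):
    """Step 1 (illustrative): divide raw CT scans into an unlabeled set
    (used for self-supervised pre-training) and a smaller labeled set
    (used for segmentation fine-tuning).  The fraction is a placeholder;
    the patent only says the labeled set is small."""
    n_labeled = max(1, int(len(ct_scans) * labeled_fraction))
    # First n_labeled scans become the labeled set, the rest stay unlabeled.
    return ct_scans[n_labeled:], ct_scans[:n_labeled]

scans = [f"scan_{i:03d}" for i in range(10)]
unlabeled, labeled = split_dataset(scans)
print(len(unlabeled), len(labeled))  # 8 2
```

In practice the split would be done at the patient/scan level (as here) rather than per slice, so that slices from one scan never appear in both sets.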
[0053] In step 1, the present inven...
Embodiment 2
[0058] This embodiment is further optimized on the basis of Embodiment 1; specifically:
[0059] In step 2, feature extraction is based on a pretext (pre-learning) task, from which the pre-training model is generated. The pretext task uses features inherent to the data itself as labels, and the network is designed and trained accordingly. The CT images generated in the same scan are continuous, so there is feature similarity between any two CT images of the same scan. The CT scanner also attaches several information labels, including the coordinate information of each CT image, which indicates its scanning position. As shown in Figure 2, there is a "relative distance" between any two CT images along the z coordinate directed toward the head. Therefore, a deep neural network can extract CT image features by learning this "relative distance" between CT images. ...
Embodiment 3
[0071] This embodiment is further optimized on the basis of Embodiment 1 or 2; specifically:
[0072] In step 3, the pre-training model obtained after feature extraction carries shallow image features of the CT data. In this step, on the basis of the pre-training model, a small amount of data containing segmentation labels is input into the segmentation network, and training yields a segmentation model. The specific steps are as follows:
[0073] Step 3A, segmentation network construction: the segmentation network is designed based on a deep neural network and comprises three parts: an encoder, a random multi-scale module, and a decoder. The encoder unit of the segmentation network is consistent with the encoder of the pretext-task network; its main structure is shown in Figure 4:
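The patent does not detail the random multi-scale module in this excerpt. One plausible reading, sketched below purely as an assumption, is a module that draws a pooling scale at random for each training step, so the decoder sees features at varying context sizes. The function name, scale set, and average-pooling choice are all hypothetical.

```python
import random

def random_multiscale(feature_map, scales=(1, 2, 4), seed=None):
    """Hypothetical 'random multi-scale module': pick one scale at random
    and average-pool the 2-D feature map (a list of lists) at that scale.
    Returns the chosen scale and the pooled map."""
    rng = random.Random(seed)
    s = rng.choice(scales)
    h = len(feature_map) // s
    w = len(feature_map[0]) // s
    pooled = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Average the s-by-s block that maps onto output cell (i, j).
            block = [feature_map[i * s + di][j * s + dj]
                     for di in range(s) for dj in range(s)]
            pooled[i][j] = sum(block) / (s * s)
    return s, pooled

fmap = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
scale, out = random_multiscale(fmap)
```

In a real network this would operate on framework tensors between encoder and decoder; the list-of-lists version here only demonstrates the random-scale idea.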
[0074] Step 3B, pre-training model loading: because the encoder structure of the segmentation network is consistent with the encoder structure of the pretext-task network ...
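Because the two encoders share a structure, step 3B amounts to copying the pre-trained encoder parameters into the segmentation network while discarding the pretext-task head. The sketch below uses plain dicts standing in for framework state dicts; the parameter names and helper are illustrative, not the patent's API.

```python
def load_pretrained_encoder(seg_weights, pretrain_weights, prefix="encoder."):
    """Copy every pre-training parameter whose name starts with the encoder
    prefix into the segmentation network's weights; decoder and multi-scale
    parameters keep their fresh initialization, and the pretext head
    (which has no counterpart in the segmentation network) is ignored."""
    updated = dict(seg_weights)
    for name, value in pretrain_weights.items():
        if name.startswith(prefix) and name in updated:
            updated[name] = value
    return updated

seg = {"encoder.conv1": 0.0, "decoder.conv1": 0.0}
pre = {"encoder.conv1": 1.5, "head.fc": 9.9}  # pretext head is discarded
merged = load_pretrained_encoder(seg, pre)
print(merged)  # {'encoder.conv1': 1.5, 'decoder.conv1': 0.0}
```

In a framework such as PyTorch the equivalent would be a filtered `load_state_dict` call with non-strict matching, so that only the shared encoder keys are loaded.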