A method for grading lens opacity based on ocular B-ultrasound images
A lens opacity grading method in the field of medical image processing. It solves the problem that lens opacity is difficult to grade, achieves accurate intelligent grading, yields reliable identification of lens characteristics and opacity, and improves grading accuracy.
Embodiment 1
[0052] As shown in Figure 1, a method for grading lens opacity based on an eye B-ultrasound image comprises the following steps:
[0053] S1. Obtain the original eye B-ultrasound image and preprocess it;
[0054] S2. Input the preprocessed eye B-ultrasound image into the trained target detection network YOLOv3 to obtain an eyeball image;
[0055] S3. Input the eyeball image into the trained convolutional neural networks DenseNet161, ResNet152 and ResNet101 respectively, to obtain three corresponding lens opacity prediction results;
[0056] S4. Take a majority vote over the three lens opacity prediction results to obtain the final lens opacity grading result.
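The majority vote in step S4 can be sketched as follows. This is a minimal illustration, not the patent's implementation; the tie-breaking rule (falling back to the first model when all three models disagree) is an assumption the source does not specify.

```python
from collections import Counter

def majority_vote(predictions):
    """Return the grade predicted by most models.

    `predictions` holds one grade label per network (e.g. from
    DenseNet161, ResNet152 and ResNet101). If all three disagree,
    fall back to the first model's prediction -- an assumption,
    since the source does not define the tie-breaking rule.
    """
    grade, freq = Counter(predictions).most_common(1)[0]
    if freq > 1:
        return grade
    return predictions[0]  # three-way tie: defer to the first model

final_grade = majority_vote([2, 2, 3])  # two models agree on grade 2
```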
Embodiment 2
[0058] The method for preprocessing the original eye B-ultrasound image in step S1 of the above embodiment is specifically:
[0059] Convert the original eye B-ultrasound image in DICOM format to an eye B-ultrasound image in PNG format with a size of 720×576.
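The preprocessing above amounts to decoding the DICOM frame and rescaling it to 720×576. As a hedged sketch: in practice a library such as pydicom would handle the DICOM decoding, and the nearest-neighbour resize below is a minimal, illustrative stand-in for the rescaling step (the function name is not from the source).

```python
def resize_nearest(image, out_w, out_h):
    """Nearest-neighbour resize of a 2-D grayscale image given as a
    list of rows; a minimal stand-in for rescaling a decoded DICOM
    frame to the 720x576 PNG used by the method."""
    in_h, in_w = len(image), len(image[0])
    return [
        [image[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

# Toy 2x2 frame upscaled to 4x4; a real frame would be resized to 720x576.
frame = [[0, 1], [2, 3]]
resized = resize_nearest(frame, 4, 4)
```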
Embodiment 3
[0061] The method for training the target detection network YOLOv3 in step S2 in the above-mentioned embodiment 1 is specifically:
[0062] A1. Construct an original eye B-ultrasound image dataset, and preprocess each original eye B-ultrasound image;
[0063] A2. Divide the preprocessed eye B-ultrasound image dataset into a target detection dataset and a feature extraction dataset;
[0064] A3. Manually annotate the eyeball position in each eye B-ultrasound image in the target detection dataset, and normalize the annotated eyeball coordinates;
[0065] A4. Resize the eye B-ultrasound images annotated with eyeball coordinates to 416×416 and input them into the target detection network YOLOv3 to extract the eyeball images, completing the training of the target detection network YOLOv3;
[0066] Here, the target detection network YOLOv3 uses the three extracted feature maps for eyeball detection, with output sizes of 13×13×(a+b+c), 26×26×(a+b+c)...
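The coordinate normalization in A3 and the three-scale output in A4 can be sketched as follows. Both function names are illustrative, the `depth` value 15 is a placeholder for the (a+b+c) term the source leaves unexpanded, and the grid sizes follow standard YOLOv3 behaviour (detection at strides 32, 16 and 8 of a 416×416 input).

```python
def normalize_bbox(x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert corner coordinates of an annotated eyeball box to
    YOLO-style (center-x, center-y, width, height), each scaled
    to [0, 1] by the image dimensions."""
    cx = (x_min + x_max) / 2 / img_w
    cy = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return cx, cy, w, h

def yolov3_grid_shapes(input_size=416, depth=15):
    """Shapes of YOLOv3's three detection feature maps: predictions
    are made at strides 32, 16 and 8, so a 416x416 input yields
    13x13, 26x26 and 52x52 grids. `depth` stands in for (a+b+c)."""
    return [(input_size // s, input_size // s, depth) for s in (32, 16, 8)]

# A centered eyeball box on a 720x576 frame normalizes to (0.5, 0.5, 0.5, 0.5).
box = normalize_bbox(180, 144, 540, 432, 720, 576)
grids = yolov3_grid_shapes()
```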