
Microscope cervical cancer TCT image cell robust detection method

A detection method and technology for cervical cancer, applied to image analysis, image enhancement, and image data processing, which can solve the problems of large differences in input feature values, weak network generalization ability, and large output discrepancies, in order to improve accuracy, enhance domain generalization ability, and reduce discrepancies.

Pending Publication Date: 2022-05-06
杭州迪英加科技有限公司


Problems solved by technology

[0004] The purpose of the present invention is to solve the problems of existing methods for robust detection of cells in pathological images under a microscope, namely the weak generalization ability of the network and the large output discrepancies caused by large differences in input feature values, and to this end it proposes a method for robust detection of cells in cervical cancer TCT images under the microscope.

Method used



Examples


Embodiment 1

[0022] Referring to Figure 1, a method for robust detection of cells in cervical cancer TCT images under a microscope comprises the following steps:

[0023] S1: Build the basic network structure: use an encoding network to extract multi-scale semantic features, and use a region proposal network (RPN) to obtain anchor boxes for the candidate detection regions, retaining N RoIs after the non-maximum suppression operation, where N is a hyperparameter that can be defined independently according to requirements. Because the strides differ across the stages of convolutional feature extraction, the RoIAlign operation is performed at the strides corresponding to feature maps of four different scales. RoIAlign traverses each candidate region, keeps its floating-point boundary without quantization, and divides the region into k×k units, the boundaries of which are likewise not quantized; four fixed coordinate positions are computed within each unit, and bilinear ...
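The RoIAlign step described above (floating-point, unquantized RoI boundaries; k×k units; four bilinearly interpolated sample points per unit) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the feature map, RoI coordinates, and sampling offsets are assumptions made for the example.

```python
import numpy as np

def bilinear_sample(fmap, y, x):
    """Sample a 2-D feature map at a continuous (y, x) position by
    bilinear interpolation of its four surrounding integer neighbors."""
    h, w = fmap.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    ly, lx = y - y0, x - x0
    return (fmap[y0, x0] * (1 - ly) * (1 - lx)
            + fmap[y0, x1] * (1 - ly) * lx
            + fmap[y1, x0] * ly * (1 - lx)
            + fmap[y1, x1] * ly * lx)

def roi_align(fmap, roi, k=2):
    """Pool one RoI (y1, x1, y2, x2) with floating-point, unquantized
    boundaries into a k×k grid; each bin averages four regularly spaced
    bilinear samples (a 2×2 grid of points inside the bin)."""
    y1, x1, y2, x2 = roi
    bin_h, bin_w = (y2 - y1) / k, (x2 - x1) / k
    out = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            vals = []
            for sy in (0.25, 0.75):      # four sample points per bin
                for sx in (0.25, 0.75):
                    vals.append(bilinear_sample(fmap,
                                                y1 + (i + sy) * bin_h,
                                                x1 + (j + sx) * bin_w))
            out[i, j] = np.mean(vals)
    return out
```

Because the boundaries are never rounded to integer pixels, a RoI such as (1.2, 1.7, 5.9, 6.3) is pooled exactly where it lies on the feature map, which is the misalignment that RoIAlign is designed to avoid.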




Abstract

The invention relates to the technical field of robust cell detection in pathological images under a microscope, and in particular to a robust cell detection method for cervical cancer TCT images under the microscope. To solve the problems of weak network generalization ability and large output discrepancies caused by large differences in input feature values in existing methods, the invention provides the following scheme, which comprises: S1, constructing a basic network structure; S2, constructing an adaptive normalization module; and S3, constructing an adaptive normalization module. The objective of the invention is to adaptively learn highly dispersed statistics, such as the feature mean, standard deviation, and scaling factor, through an auto-encoder, thereby improving the accuracy of input feature values, reducing the resulting discrepancies, and effectively improving the precision of cell detection in microscope pathological images. Meanwhile, the adaptive normalization module is used in the same way as traditional batch normalization, is compatible with deep convolutional neural networks, and enhances the domain generalization ability of the model.
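The abstract describes a normalization module whose statistics (mean, standard deviation, scaling factor) are re-estimated by an auto-encoder instead of being taken raw from the batch. The patent does not disclose the module's architecture in this excerpt, so the following is only a minimal sketch of that idea: `enc_w` and `dec_w` are a hypothetical, untrained linear auto-encoder over the per-channel statistics vector, not the patent's network.

```python
import numpy as np

def adaptive_norm(x, enc_w, dec_w, eps=1e-5):
    """Normalize x of shape (N, C), but pass the per-channel batch
    statistics through a tiny linear auto-encoder (enc_w, dec_w) so the
    statistics used for normalization are re-estimated rather than raw."""
    mu = x.mean(axis=0)                 # raw per-channel mean
    sd = x.std(axis=0)                  # raw per-channel std
    stats = np.concatenate([mu, sd])    # statistics vector, length 2C
    recon = dec_w @ (enc_w @ stats)     # auto-encoder re-estimate
    c = x.shape[1]
    mu_hat = recon[:c]
    sd_hat = np.abs(recon[c:]) + eps    # keep the scale strictly positive
    return (x - mu_hat) / sd_hat
```

With identity encoder and decoder weights this reduces to ordinary per-channel standardization; in the patent's scheme the auto-encoder would be trained jointly with the detector so that highly dispersed statistics are smoothed before use.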

Description

technical field

[0001] The invention relates to the technical field of robust cell detection in pathological images under a microscope, and in particular to a method for robust detection of cells in cervical cancer TCT images under the microscope.

Background technique

[0002] Deep-learning-based object detection networks are widely used for cell detection in digital pathology images. Such a network typically encodes multi-scale features and their context information through a convolutional neural network, then applies a region proposal network to the encoded deep features to obtain candidate region boxes, and finally regresses the category and position of each candidate box using non-maximum suppression, fully connected classification, and related techniques, thereby detecting and classifying the cells. Object detection networks require rich context information an...
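The non-maximum suppression mentioned above (pruning overlapping candidate boxes while keeping the highest-scoring ones) can be sketched as follows; the box format, scores, and IoU threshold here are illustrative, not values from the patent.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: visit boxes in descending score
    order and keep a box only if it overlaps no already-kept box by
    more than the IoU threshold. Returns the kept indices."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```

In a detector of the kind described here, this pruning is what reduces the dense set of RPN proposals to the N retained RoIs.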

Claims


Application Information

IPC(8): G06T7/00; G06N3/04; G06N3/08
CPC: G06T7/0012; G06N3/08; G06T2207/20004; G06T2207/30004; G06N3/045
Inventor: 亢宇鑫, 崔灿, 崔磊, 杨林
Owner: 杭州迪英加科技有限公司