
Multimodal collaborative image segmentation system for esophageal cancer lesions based on self-sampling similarity

A multimodal esophageal cancer technology in the field of intelligent medical image processing. It addresses the problem that prior researchers considered only white-light images, and achieves the effects of improved diagnostic efficiency and precise lesion segmentation.

Active Publication Date: 2022-04-12
FUDAN UNIV

AI Technical Summary

Problems solved by technology

[0006] Although the above methods have achieved certain results, the researchers considered only the use of white-light images.




Embodiment Construction

[0034] The embodiments of the present invention will be described in detail below, but the protection scope of the present invention is not limited to the examples.

[0035] Using the network structure in Figure 1, a multimodal neural network is trained with 268 white-light/NBI image pairs to obtain an automatic lesion-region segmentation model.

[0036] The specific steps are:

[0037] (1) During training, scale the images to 500×500. Set the initial learning rate to 0.0001 and the decay rate to 0.9, decaying once every two epochs. Minimize the network's loss function using mini-batch stochastic gradient descent with a batch size of 4, updating all parameters in the network and training until convergence;
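The step-decay schedule described above (initial rate 0.0001, multiplied by 0.9 once every two epochs) can be sketched as a small helper; the function name and loop are illustrative assumptions, not code from the patent.

```python
def learning_rate(epoch, base_lr=1e-4, decay=0.9, every=2):
    """Step-decay schedule: multiply base_lr by `decay` once every `every` epochs."""
    return base_lr * decay ** (epoch // every)

# Epochs 0-1 use the base rate; epoch 2 is the first decayed step.
rates = [learning_rate(e) for e in range(6)]
```

In a typical training loop this value would be fed to a mini-batch SGD optimizer (batch size 4, per the text) at the start of each epoch.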

[0038] (2) During testing, resize the image I to 500×500 and input it into the trained model; the model outputs the lesion-area segmentation results for the white-light and NBI images;
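As a minimal sketch of the test-time resize step, the following nearest-neighbour resize brings an arbitrary frame to the 500×500 input size the model expects; a real pipeline would typically use bilinear interpolation from an imaging library, and all names here are hypothetical.

```python
import numpy as np

def resize_nearest(img, size=(500, 500)):
    """Nearest-neighbour resize of an (H, W[, C]) array to `size`."""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]  # source row for each output row
    cols = np.arange(size[1]) * w // size[1]  # source column for each output column
    return img[rows][:, cols]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # e.g. an endoscopic video frame
resized = resize_nearest(frame)                  # shape (500, 500, 3)
```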

[0039] Figure 4 shows the segmentation results...


Abstract

The invention belongs to the technical field of intelligent medical image processing, and specifically relates to a self-sampling-similarity-based multimodal collaborative esophageal cancer lesion image segmentation system. For the diagnosis of early esophageal cancer, a paired white-light/NBI image data set is first constructed. The segmentation system comprises a feature extraction encoder, a self-sampling similar-feature separation module, and a feature fusion decoder. With white-light and NBI images as multimodal input, the features of the two modalities are extracted by two encoders; for each modality, the self-sampling similar-feature separation module distinguishes the features of the lesion area from those of the normal area; the decoder completes multimodal feature fusion in the feature domain and finally outputs the lesion segmentation results for both modality images. Experimental results show that the invention can reasonably integrate features of different modalities, clearly distinguish diseased areas from normal areas, realize precise segmentation of diseased areas, and improve the efficiency of clinical diagnosis.
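The "self-sampling similarity" idea described in the abstract can be sketched as follows: sample reference feature vectors from the model's own feature map, score every position by cosine similarity to those references, and use the scores to separate lesion-like from normal-like features. All names, shapes, and the specific scoring rule below are assumptions for illustration only, not the patent's implementation.

```python
import numpy as np

def cosine_similarity_map(features, anchors):
    """features: (N, C) per-position features; anchors: (K, C) self-sampled
    reference vectors. Returns (N, K) cosine similarities."""
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    a = anchors / (np.linalg.norm(anchors, axis=1, keepdims=True) + 1e-8)
    return f @ a.T

rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 16))       # 100 spatial positions, 16 channels
anchors = feats[:4]                          # references sampled from the map itself
sim = cosine_similarity_map(feats, anchors)  # (100, 4) similarity scores
# Positions scoring high against a lesion anchor group with the lesion region;
# the rest group with normal tissue.
```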

Description

Technical field

[0001] The invention belongs to the technical field of intelligent medical image processing, in particular to an esophageal cancer lesion image segmentation system, and more specifically to a self-sampling-similarity-based multimodal collaborative esophageal cancer lesion image segmentation system.

Background technique

[0002] Esophageal cancer is a common upper gastrointestinal cancer in China and other developing countries, ranking sixth in cancer deaths worldwide [1]. Studies have shown that the 5-year survival rate of patients with advanced esophageal cancer is less than 20%, but can exceed 85% when the disease is caught early. Advanced unresectable tumors are the main cause of poor prognosis in most patients with esophageal cancer. Therefore, accurate diagnosis of early esophageal cancer plays an important role in reducing mortality.

[0003] Endoscopic screening significantly reduces mortality from esophageal cancer. White-light endoscopic imaging (WH) is the most ...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/11; G06T7/00; G06N3/04; G06N3/08
CPC: G06T7/11; G06T7/0012; G06N3/08; G06T2207/10068; G06T2207/20081; G06T2207/20084; G06T2207/30096; G06N3/045
Inventors: 钟芸诗, 颜波, 蔡世伦, 谭伟敏, 林青
Owner FUDAN UNIV