
Multi-modal breast cancer classification training method and system based on graph attention network

A training method and system using attention technology, applied in the fields of disease classification and deep learning, which addresses the problems that the complementarity between modalities is not fully exploited and that single-modal data makes benign/malignant breast cancer classification difficult to meet clinical diagnostic requirements, achieving the effect of improved classification performance.

Pending Publication Date: 2022-08-05
YANGZHOU UNIV
Cites: 0 | Cited by: 2

AI Technical Summary

Problems solved by technology

However, these methods still have some shortcomings: (1) A patient may have multiple pathological images of various parts of the breast, and there are interactions among these images. (2) Most existing research uses pathological images as the input to a convolutional neural network, but considering only single-modal image data makes it difficult for benign/malignant breast cancer classification to meet clinical diagnostic requirements. (3) There are correlations between the data of different modalities, and simple fusion methods do not give full play to the complementarity between the modalities.




Embodiment Construction

[0061] In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention will be further described in detail below with reference to the accompanying drawings. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative efforts shall fall within the protection scope of the present invention. It should be understood that the specific embodiments described herein are only used to explain the present invention, but not to limit the present invention.

[0062] With reference to figure 1, a schematic flowchart of the first embodiment of the present invention, the invention proposes a breast cancer classification training method based on a graph attention network, which mainly includes the following steps:

[0063] Step 1: extract representative pathological features from the patient's electronic medical record (EMR), digitize each feature, and pr...
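Step 1's digitization of EMR features can be illustrated with a minimal sketch. The field names and encodings below are hypothetical — the excerpt shown here does not list the actual pathological features the patent extracts:

```python
# Hypothetical EMR fields and encoders; the patent does not specify the
# actual feature set, so this only illustrates the digitization step.
encoders = {
    "tumor_size_cm": float,                              # numeric field
    "lymph_node_status": {"negative": 0, "positive": 1}.get,  # categorical
    "er_status": {"negative": 0, "positive": 1}.get,
}

def digitize(record):
    """Map one EMR record to a numeric feature vector (fixed field order)."""
    return [encoders[k](record[k]) for k in encoders]

vec = digitize({"tumor_size_cm": "2.3",
                "lymph_node_status": "positive",
                "er_status": "negative"})
print(vec)
```

A fixed field order matters here: the resulting vector is later concatenated or fused with the other modalities, so every patient must be encoded with the same layout.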



Abstract

The invention discloses a multi-modal breast cancer classification training method and system based on a graph attention network. The method comprises the following steps: first, pathological feature extraction and text processing are performed on the electronic medical record to form a medical record text, and a text feature is obtained through a pre-trained model; meanwhile, high-order feature extraction is performed on the patient's pathology image set using a graph attention network; then, the obtained image, text, and pathological features are fused through a multi-modal adaptive gating unit to obtain the patient's multi-modal fusion feature; finally, the fused multi-modal feature is input into a multi-layer perceptron for classification prediction, and the model is trained with a cross-entropy loss function. By fusing image, text, and pathological features to classify breast cancer, the proposed network structure significantly outperforms single-modal methods, achieving the purpose of improving breast cancer classification accuracy.
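The pipeline described in the abstract — graph attention over a patient's pathology-image graph, gated fusion of the image, text, and pathological feature vectors, then a classifier head — can be sketched as follows. This is a minimal NumPy sketch under assumed layer sizes, a fully-connected image graph, and a sigmoid gating form; the patent's actual architecture, dimensions, and gating equations are not given in this excerpt:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_attention(H, A, W, a):
    """One graph-attention layer over N image nodes with adjacency A.
    Attention logit e_ij = a^T [Wh_i || Wh_j], masked to edges, softmaxed."""
    Z = H @ W                                   # projected node features (N, d)
    N = Z.shape[0]
    logits = np.array([[a @ np.concatenate([Z[i], Z[j]])
                        for j in range(N)] for i in range(N)])
    logits = np.where(A > 0, logits, -1e9)      # keep only graph edges
    alpha = softmax(logits, axis=1)             # attention coefficients
    return np.tanh(alpha @ Z)                   # aggregated node features

def gated_fusion(feats, gate_weights):
    """Adaptive gating unit (assumed form): sigmoid gate per modality,
    gated features summed into one fused vector."""
    fused = 0.0
    for f, Wg in zip(feats, gate_weights):
        g = 1.0 / (1.0 + np.exp(-(Wg @ f)))     # per-dimension gate in (0, 1)
        fused = fused + g * f
    return fused

# Toy shapes: 4 pathology images per patient, 8-dim features per modality.
H = rng.normal(size=(4, 8))
A = np.ones((4, 4))                             # fully-connected image graph
img = graph_attention(H, A,
                      rng.normal(size=(8, 8)) * 0.1,
                      rng.normal(size=16) * 0.1).mean(axis=0)  # pool to (8,)
txt, path = rng.normal(size=8), rng.normal(size=8)
fused = gated_fusion([img, txt, path],
                     [rng.normal(size=(8, 8)) * 0.1 for _ in range(3)])
probs = softmax(fused @ rng.normal(size=(8, 2)))  # classifier head (benign/malignant)
print(probs.shape)
```

In training, `probs` would feed a cross-entropy loss against the benign/malignant label, as the abstract describes; the single linear head here stands in for the patent's multi-layer perceptron.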

Description

technical field [0001] The invention belongs to the fields of deep learning and disease classification, and in particular relates to a multi-modal breast cancer classification training method and system based on a graph attention network. Background technique [0002] Breast cancer is one of the most serious diseases threatening human life and health, and a medical and public-health problem of worldwide concern. According to data released in 2020 by the International Agency for Research on Cancer (IARC) under the World Health Organization (WHO), new breast cancer cases reached 2.26 million, exceeding the 2.2 million new cases of lung cancer; breast cancer thus replaced lung cancer as the most common cancer in the world. Breast cancer can occur in both men and women, and more than 98% of breast cancer patients are women. Its incidence ranks first in the world and is increasing year by year, and the trend o...

Claims


Application Information

IPC (IPC8): G16H50/20, G16H10/60, G06K9/62, G06T7/00, G06V10/82, G06F40/279, G06N3/04
CPC: G16H50/20, G16H10/60, G06T7/0012, G06V10/82, G06F40/279, G06T2207/20084, G06T2207/30068, G06T2207/30096, G06V2201/03, G06N3/047, G06N3/048, G06N3/045, G06F18/2415, G06F18/243, G06F18/253
Inventor: 章永龙 (Zhang Yonglong), 宋明宇 (Song Mingyu), 李斌 (Li Bin)
Owner YANGZHOU UNIV