Target region extraction method for multi-modal medical image based on convolutional neural network

A convolutional neural network technology for target region extraction, applied in the field of target region extraction from multimodal medical images. It addresses problems such as the underutilization of the differences between MRI images of different parameters, high manpower consumption, and subjective variability, aiming to overcome the time-consuming and labor-intensive nature of manual delineation and to improve extraction accuracy.

Active Publication Date: 2020-06-05
SUZHOU INST OF BIOMEDICAL ENG & TECH CHINESE ACADEMY OF SCI

AI Technical Summary

Problems solved by technology

Multi-parameter MR imaging is one of the early diagnostic methods for prostate cancer. Precise segmentation of the prostate cancer lesion area on multi-parameter MR images is of great significance for accurate grading of prostate cancer and for high-quality MR image-guided biopsy. Manual delineation of prostate cancer lesions on multi-parameter MR images suffers from two main problems: first, subjective variability, i.e., for the same multi-parameter MR case, experienced and inexperienced doctors produce different observations; second, high labor cost. Automatic and accurate segmentation of prostate cancer lesions can effectively improve the consistency of segmentation results and improve doctors' work efficiency.
Yohannes et al. [4], in order to make full use of existing network models, placed the multi-parameter MR images into the three channels of an RGB image, fusing the multi-parameter information at the input-image level. Although this method fuses the information in multi-parameter MRI to a certain extent, it does not exploit the connections between the high-level features of MR images with different parameters. Yang Xin et al. [5] used two parallel convolutional networks to extract ADC and T2W image features, and used the gap between the ADC feature map and the T2W feature map as a constraint to guide the networks to extract effective features from the differently parameterized MR images during training. Although this method effectively exploits the consistency between the high-level features of MR images with different parameters, it does not make full use of the differences between the MRI features of different parameters.
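The consistency constraint attributed to Yang Xin et al. [5] above can be illustrated with a minimal sketch: two parallel branches produce one feature map per modality, and a penalty on the gap between the ADC and T2W feature maps is added to the training loss. The shapes, the choice of a mean-squared gap, and the function name are assumptions for illustration, not the cited method's exact formulation.

```python
import numpy as np

def feature_consistency_loss(adc_features, t2w_features):
    """Mean squared gap between two modality feature maps of shape (C, H, W).

    Hypothetical stand-in for the ADC/T2W feature-map constraint
    described in the text; the true loss form is not specified here.
    """
    assert adc_features.shape == t2w_features.shape
    return float(np.mean((adc_features - t2w_features) ** 2))

rng = np.random.default_rng(0)
adc = rng.standard_normal((64, 32, 32))   # stand-in ADC branch feature map
t2w = rng.standard_normal((64, 32, 32))   # stand-in T2W branch feature map

loss = feature_consistency_loss(adc, t2w)
print(loss > 0.0)
```

During training this penalty would be added to the segmentation loss, pulling the two branches toward consistent high-level representations of the same anatomy.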

Method used




Embodiment Construction

[0043] The present invention will be further described in detail below in conjunction with the embodiments, so that those skilled in the art can implement it with reference to the description.

[0044] It should be understood that terms such as "having", "comprising" and "including" used herein do not exclude the presence or addition of one or more other elements or combinations thereof.

[0045] A method for extracting a target region of a multimodal medical image based on a convolutional neural network in this embodiment includes the following steps:

[0046] 1) Construct a mask region convolutional neural network for target region extraction in multimodal medical images:

[0047] Referring to figure 1, the mask region convolutional neural network includes a multi-modal medical image feature extraction network, a target region proposal network, and a head network. The multi-modal medical image feature extraction network extracts multi-level fusion feature maps of multi-modal...
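The three components named in paragraph [0047] can be sketched as a structural skeleton. All class names, method names, and return values below are illustrative placeholders, assuming a Mask R-CNN-style division of labor; they are not the patent's implementation.

```python
class FeatureExtractionNetwork:
    """Placeholder for the multi-modal feature extraction network."""
    def extract(self, multimodal_image):
        # Would return multi-level fused feature maps; keys are hypothetical.
        return {"P2": None, "P3": None, "P4": None, "P5": None}

class RegionProposalNetwork:
    """Placeholder for the target region proposal network."""
    def propose(self, feature_maps):
        # Would return candidate target-region boxes (x1, y1, x2, y2).
        return [(10, 10, 50, 50)]

class HeadNetwork:
    """Placeholder for the head network (class, box, mask per proposal)."""
    def predict(self, feature_maps, proposals):
        return [{"box": p, "mask": None} for p in proposals]

class MaskRegionCNN:
    """Wires the three components together in the order the text describes."""
    def __init__(self):
        self.backbone = FeatureExtractionNetwork()
        self.rpn = RegionProposalNetwork()
        self.head = HeadNetwork()

    def forward(self, image):
        feats = self.backbone.extract(image)
        proposals = self.rpn.propose(feats)
        return self.head.predict(feats, proposals)

result = MaskRegionCNN().forward(None)
print(len(result))
```

The point of the skeleton is the data flow: fused multi-level features feed the proposal network, and the head refines each proposal into a segmented target region.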



Abstract

The invention discloses a target region extraction method for multi-modal medical images based on a convolutional neural network. The method comprises the following steps: 1) constructing a mask region convolutional neural network for target region extraction in a multi-modal medical image; 2) training the constructed mask region convolutional neural network; and 3) inputting a to-be-processed multi-modal medical image into the trained mask region convolutional neural network to perform target region extraction. The invention enables automatic and accurate segmentation of a target region in a multi-modal medical image, overcomes the subjective variability and the time-consuming, labor-intensive drawbacks of manual segmentation, and improves the accuracy of target region extraction. Feature extraction from the multi-modal medical image is realized through multiple parallel SE-ResNets, and by integrating squeeze-and-excitation blocks into the feature extraction network, both the feature extraction efficiency and the information fusion efficiency for multi-modal medical images are improved.
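The squeeze-and-excitation block the abstract integrates into the feature extraction network can be sketched numerically: channels are squeezed by global average pooling, passed through a bottleneck, and the resulting per-channel weights rescale the feature map. The reduction ratio, weight shapes, and random inputs below are assumptions for illustration, not the patent's parameters.

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-excitation sketch.

    x: feature map of shape (C, H, W); w1: (C, C//r); w2: (C//r, C).
    Biases are omitted for brevity.
    """
    # Squeeze: global average pooling per channel -> (C,)
    z = x.mean(axis=(1, 2))
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid -> (C,)
    s = np.maximum(z @ w1, 0.0)
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))
    # Recalibrate: scale each channel by its learned weight in (0, 1)
    return x * s[:, None, None]

rng = np.random.default_rng(1)
C, r = 16, 4                                # channel count and reduction ratio (assumed)
x = rng.standard_normal((C, 8, 8))
w1 = rng.standard_normal((C, C // r)) * 0.1
w2 = rng.standard_normal((C // r, C)) * 0.1

y = se_block(x, w1, w2)
print(y.shape)
```

Because the sigmoid keeps every channel weight strictly between 0 and 1, the block can only attenuate channels, which is how it re-weights modality-specific features before fusion.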

Description

technical field [0001] The invention relates to the field of medical image processing, and in particular to a method for extracting a target region of a multimodal medical image based on a convolutional neural network. Background technique [0002] Prostate cancer is one of the most common types of cancer in middle-aged and elderly men, ranking fifth among the cancer types that cause death in men. Early detection and timely treatment of prostate cancer can effectively improve patients' five-year survival rate. Multi-parameter MR imaging is one of the early diagnostic methods for prostate cancer. Precise segmentation of the prostate cancer lesion area on multi-parameter MR images is of great significance for accurate grading of prostate cancer and for high-quality MR image-guided biopsy. Manual delineation of prostate cancer lesions on multi-parameter MR images suffers from two main problems: one is the existence of subjective di...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/11, G06K9/62
CPC: G06T7/11, G06T2207/20081, G06T2207/20084, G06T2207/30081, G06V2201/03, G06F18/253
Inventor 戴亚康胡冀苏钱旭升周志勇黄毅鹏赵文露马麒沈钧康
Owner SUZHOU INST OF BIOMEDICAL ENG & TECH CHINESE ACADEMY OF SCI