
An image registration method and apparatus based on deep learning

An image registration technology based on deep learning, applied in the field of image processing, which solves problems such as loss of spatial information, labeling errors, and easily lost spatial position information, and achieves image registration with high accuracy and high robustness.

Active Publication Date: 2019-02-15
SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

However, when processing features of the same instance, spatial invariance is achieved through downsampling, which discards the spatial hierarchy and orientation information of the image and makes the network insensitive to rotation and angle. Most existing registration models rely on convolution operations that easily lose orientation features, which reduces the accuracy of the registration model.
[0004] The commonly used methods currently have the following problems: 1. Existing deep learning approaches to medical image registration usually extract image features with convolution and pooling operations. When handling feature invariance, because the features of the image to be registered and the reference image differ in position and angle, a traditional convolutional neural network cannot effectively detect their specific orientation information, and this loss of spatial transformation information ultimately degrades the performance of the registration model.
In practice, the pooling operation discards spatial information. This has little effect on classification tasks but a large effect on registration tasks. When a convolutional neural network is used for registration, data augmentation can help the network recognize positional changes such as rotation, but the network still tends to lose the spatial hierarchy and orientation information of the image, making it insensitive to spatial transformations of the same semantic content in the images to be registered and thereby reducing registration accuracy (a small sketch after this list illustrates how pooling discards position information).
2. The deep learning methods mentioned above perform medical image registration in a supervised manner. Supervised registration requires manually annotating the registration regions of the training samples, and during training the predictions are compared with the label information to feed back the network loss. Manually labeling samples is time-consuming and labor-intensive, and labeling large-sample datasets takes too long to be practical.
In addition, manual labeling depends heavily on the expertise of specialist physicians; labeling samples in large quantities is prone to labeling errors, and the registration results are easily affected by labeling quality.
3. The pooling operation used by traditional convolutional neural networks easily loses spatial position information, and achieving high-accuracy results usually requires a large number of training samples so that the network can fully learn the shape and pose features of various targets; traditional deep learning registration models therefore often require large training sets.
For medical image processing tasks, however, it is difficult to obtain large batches of training samples with relatively uniform specifications because of patient privacy and differences among diagnostic instruments. As a result, the large training sets that traditional deep learning registration networks require are hard to obtain in full, which limits registration accuracy.
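To make the pooling issue in point 1 concrete, the following is a minimal, self-contained sketch (illustrative only, not code from the patent): two images whose salient pixel lies at different positions inside the same 2x2 pooling window produce identical max-pooled outputs, so the exact position is discarded.

```python
import numpy as np

def max_pool_2x2(x):
    """Non-overlapping 2x2 max pooling on a 2D array (minimal illustration)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Two 4x4 "images": the bright pixel sits at different positions
# inside the same 2x2 pooling window.
a = np.zeros((4, 4)); a[0, 0] = 1.0
b = np.zeros((4, 4)); b[1, 1] = 1.0

print(max_pool_2x2(a))
print(max_pool_2x2(b))
print(np.array_equal(max_pool_2x2(a), max_pool_2x2(b)))  # True: position is lost
```

This lossiness is exactly what makes a pooled representation convenient for classification but problematic for registration, where the displacement between images is the information of interest.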




Embodiment Construction

[0048] To enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.

[0049] The terms "first", "second", "third", "fourth", etc. (if any) in the description, claims, and drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It is to be understood that the terms so used are interchangeable under app...



Abstract

The invention provides an image registration method and apparatus based on deep learning. The image registration model is constructed using a capsule network: the scalar feature representation and pooling mechanism of a traditional deep learning convolutional network are replaced by vector feature representations and a routing mechanism, and capsules at different levels are connected stage by stage to combine features. An image fusion network based on capsule vectors is constructed, which outputs a fused image with the same dimensions as the reference image as the registration output. A loss function based on an inter-image similarity measure is constructed to optimize the unsupervised registration network by feeding back and updating the network parameters, achieving image registration with high accuracy and high robustness.
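The abstract states that the network is trained without labels by optimizing a loss built from an inter-image similarity measure, but the text shown here does not name the measure. The sketch below uses global normalized cross-correlation, a common similarity measure in registration, purely as an illustration; `model`, `moving`, and `reference` are hypothetical placeholders for the capsule-based fusion network and its inputs, not names from the patent.

```python
import torch

def ncc_loss(fused: torch.Tensor, reference: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Negative global normalized cross-correlation between the network output and the
    reference image; minimizing it pushes the two images toward agreement.
    Expected shapes: (batch, channels, H, W)."""
    f = fused.flatten(1)
    r = reference.flatten(1)
    f = f - f.mean(dim=1, keepdim=True)
    r = r - r.mean(dim=1, keepdim=True)
    ncc = (f * r).sum(dim=1) / (f.norm(dim=1) * r.norm(dim=1) + eps)
    return -ncc.mean()

# Usage sketch (hypothetical): 'model' stands for the capsule-based fusion network
# described in the abstract; 'moving' and 'reference' are the images to be registered.
# fused = model(moving, reference)
# loss = ncc_loss(fused, reference)
# loss.backward()  # unsupervised: no manually labeled registration regions are needed
```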

Description

Technical Field

[0001] The present invention relates to the field of image processing, and in particular to an image registration method and device based on deep learning.

Background Technique

[0002] Medical image registration refers to the process of matching and superimposing two or more medical images acquired at different times, with different imaging devices, or under different conditions. At present, information technology represented by deep learning and high-end medical imaging technology continue to make major breakthroughs, and registration using deep learning has become a new hot spot in the field of medical image registration.

[0003] An existing medical image registration method based on a convolutional neural network (Chinese patent application CN201711017916.8) introduces a tensor column (tensor train) representation into the weight matrix of a fully connected layer of the convolutional neural network to obtain a tensor convolutional neural network, thereby obtaining at le...
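For context on the tensorized fully connected layer in the cited prior art ([0003]): the sketch below is a rough, hypothetical illustration, not code from that patent, of why factorizing a large fully connected weight matrix into small tensor-train cores drastically reduces the number of parameters. The mode sizes and TT-ranks are made-up example values.

```python
# A fully connected layer mapping 1024 -> 1024 features.
# Storing the full weight matrix:
full_params = 1024 * 1024  # 1,048,576 parameters

# Tensor-train idea (illustrative shapes only): view 1024 = 4*4*8*8 on both the
# input and output side and store one small core per factor instead of the full matrix.
in_modes  = [4, 4, 8, 8]
out_modes = [4, 4, 8, 8]
ranks     = [1, 8, 8, 8, 1]  # TT-ranks; boundary ranks are 1

tt_params = sum(ranks[k] * in_modes[k] * out_modes[k] * ranks[k + 1]
                for k in range(len(in_modes)))
print(full_params, tt_params)  # ~1.05M parameters vs. a few thousand
```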

Claims


Application Information

IPC(8): G06T7/33
CPC: G06T2207/10088; G06T2207/10092; G06T2207/10116; G06T2207/20081; G06T2207/20084; G06T7/337
Inventors: 王书强, 王翔宇, 王鸿飞
Owner: SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI