An image fusion method based on a convolutional neural network

A convolutional neural network based image fusion technology, applied to instruments, character and pattern recognition, computer parts, and other fields; it can solve problems such as the loss of relevant information.

Active Publication Date: 2019-01-25
KUNMING UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

However, in both of these types of methods the filter banks are set manually, so a great deal of relevant information is missed and redundant information is introduced during filtering.

Method used


Examples


Embodiment 1

[0042] Embodiment 1: The method described in the summary of the invention above is used to fuse the two multi-focus images to be fused, shown in Figure 4(a) and (b). The seven high-definition pictures shown in Figure 5 are cropped and blurred in MATLAB to obtain a set of 70 training pictures.

[0043] The 70 training pictures are converted into .h5 files and fed into the training model. The training set contains two groups of data: the real clear pictures, and ten blurred pictures generated from each clear picture. The real picture serves as the label and is used to compute the error. After several iterations, two sets of model weights are obtained and saved.
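The data-generation step above (seven clear pictures, each yielding ten blurred copies, for 70 training pairs) can be sketched in NumPy. This is a minimal sketch, not the patent's pipeline: the patent uses MATLAB cropping and an .h5 file format, and the Gaussian kernel and the sigma schedule below are assumptions.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.5):
    """1-D Gaussian kernel used for separable blurring."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Blur a 2-D grayscale image with a separable Gaussian."""
    k = gaussian_kernel(sigma=sigma)
    # Convolve every row, then every column ('same' keeps the shape).
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def make_training_pairs(clear_imgs, n_blur=10):
    """For each clear image, generate n_blur blurred copies, giving
    (blurred input, clear label) pairs -- mirroring the patent's
    7 clear pictures -> 70 training pictures."""
    pairs = []
    for img in clear_imgs:
        for i in range(n_blur):
            sigma = 0.5 + 0.3 * i   # increasing blur strength (assumed schedule)
            pairs.append((blur(img, sigma), img))
    return pairs

clear = [np.random.rand(32, 32) for _ in range(7)]
pairs = make_training_pairs(clear)
print(len(pairs))  # 70
```

Each pair keeps the clear picture as the supervision target, matching the paragraph above where the real picture is the label used for the error calculation.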

[0044] The weights of the VGG16 model are initialized by reading the two sets of weights produced by training. After the fused image passes through the network, five feature maps (corresponding to the five layers of VGG16) are obtained. Only the fusion rule is...
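The paragraph above is truncated, but the abstract states that the fusion rule is sigmoid-based. The following is a minimal NumPy sketch of such a rule: a crude local-energy activity measure stands in for the real VGG16 feature maps, and the weight map is a sigmoid of the activity difference. The activity measure, the gain k, and the function names are assumptions for illustration, not the patent's exact rule.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def activity(img, win=3):
    """Crude activity measure: local sum of squared values,
    standing in for the VGG16 feature-map responses (assumed)."""
    pad = win // 2
    p = np.pad(img ** 2, pad, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + win, j:j + win].sum()
    return out

def sigmoid_fuse(a, b, k=1.0):
    """Sigmoid fusion rule (sketch): the per-pixel weight follows the
    relative activity of the two sources, and the fused image is the
    weighted combination."""
    w = sigmoid(k * (activity(a) - activity(b)))
    return w * a + (1.0 - w) * b

a = np.random.rand(16, 16)
b = np.random.rand(16, 16)
fused = sigmoid_fuse(a, b)
```

Because the sigmoid keeps every weight strictly between 0 and 1, each fused pixel is a convex combination of the two source pixels, so the rule never produces values outside the range spanned by the inputs.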

Embodiment 2

[0049] Embodiment 2: Using the method from the summary of the invention and the specific values from Embodiment 1, the two medical images to be fused, shown in Figure 6(a) and (b), are fused; one is a CT image and the other is an MRI image. The training data set is shown in Figure 7, and the final fused image is shown in Figure 6(c). Its results are compared with the prior art in Table 2.

[0050] Table 2 Comparison between this method and the prior art on the fusion results of medical pictures


[0052] As can be seen from Table 2, this method has clear advantages over other conventional methods in the fusion of medical pictures, particularly on the Qcv index, which indicates that analyzing pictures with neural networks corresponds more closely to human understanding and cognition.



Abstract

The invention relates to an image fusion method based on a convolutional neural network, belonging to the fields of information fusion and image processing. A fused picture is obtained by training and then using a convolutional neural network. The network is trained in advance on a training set selected according to the pictures to be fused, and the whole training process is supervised; the training involves both the analysis and the synthesis of pictures. The two groups of trained model weights are then used in the deep-neural-network fusion model: one for the analysis (feed-forward) network and one for the synthesis (feedback) network. The fusion rules used during training and fusion are sigmoid-function fusion rules based on deep learning. The invention avoids the omission of relevant information and the introduction of redundant information in the fusion process.
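The supervised training the abstract describes — a network fitted so that its output on a blurred input matches the clear label — can be illustrated with a toy single-filter model in NumPy. This is purely illustrative: the 3x3 filter, learning rate, and hand-written gradient-descent loop are assumptions, not the patent's actual VGG16 training.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, k):
    """Same-size 2-D correlation with a 3x3 kernel (edge padding)."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (p[i:i + 3, j:j + 3] * k).sum()
    return out

# Label / input pair: a clear picture and a box-blurred copy of it.
clear = rng.random((16, 16))
blurred = conv2d(clear, np.full((3, 3), 1.0 / 9.0))

k = rng.normal(0.0, 0.1, (3, 3))   # trainable deblurring filter (toy model)
lr = 0.05
p = np.pad(blurred, 1, mode="edge")

def mse():
    return float(((conv2d(blurred, k) - clear) ** 2).mean())

mse_before = mse()
for _ in range(100):
    err = conv2d(blurred, k) - clear
    grad = np.zeros((3, 3))
    for a in range(3):
        for b in range(3):
            # dL/dk[a,b] for the MSE loss of this linear model
            grad[a, b] = 2.0 * (err * p[a:a + 16, b:b + 16]).mean()
    k -= lr * grad
mse_after = mse()
```

After the loop, the reconstruction error against the clear label has dropped, which is the essence of the supervised step: the clear picture is the target, and the loss drives the learned weights.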

Description

technical field

[0001] The invention relates to an image fusion method based on a convolutional neural network, belonging to the field of image fusion.

Background technique

[0002] Image fusion is the synthesis of images, or of image-sequence information, about a specific scene acquired by two or more sensors at the same time or at different times, in order to generate new information about the scene.

[0003] With the development of multi-source image fusion technology, its applications in military and civilian fields have deepened, and it is of great significance to economic and national-defense construction. Multi-source images are roughly divided into multi-sensor images, remote-sensing multi-source images, multi-focus images, and time-series images. Both multi-focus images and time-series images are obtained by the same sensor, using different imaging methods or different imaging times. In particular, multi-focus images are obtained by the same sensor using diff...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC(8): G06K9/62
CPC: G06F18/214; G06F18/251
Inventor: 王蒙, 刘兴旺, 梁敏
Owner KUNMING UNIV OF SCI & TECH