
Method for image fusion based on representation learning

An image fusion technology, applied in the field of image fusion based on representation learning, which addresses the problem that research on the mathematical theory of image fusion models remains incomplete, and achieves the effects of speeding up the solution process and improving fusion quality.

Status: Inactive; Publication date: 2015-08-19

Problems solved by technology

[0004] To date, research on image fusion models and their mathematical theory remains far from complete, and many problems urgently need to be solved. For example, can a unified mathematical framework be established for the common problems of practical image fusion? How to use image fusion technology to reconstruct accurate image information in real time is also a major issue.


Embodiment Construction

[0018] In order to make the objects and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.

[0019] As shown in Figure 1, an embodiment of the present invention provides an image fusion method based on representation learning, comprising the following steps:

[0020] S1. Acquire multi-source images; learn the features of the multi-source images through a deep learning framework composed of a deep neural network built from sparse autoencoders, a deep belief network built from Boltzmann machines, and a deep convolutional neural network; and use these automatically learned features to complete the fusion of the multi-source images and establish an image fusion model;
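The patent text gives no implementation for step S1. The following is a minimal, hypothetical sketch (in PyTorch, which the patent does not specify) of the sparse-autoencoder branch only: a sparse autoencoder learns features of image patches without supervision, the features of two source images are combined (here by an element-wise maximum, an assumed fusion rule used purely for illustration), and the decoder reconstructs a fused patch. The deep belief network and convolutional branches would contribute further feature streams in the same spirit. All layer sizes, the sparsity target and penalty weight, and the helper names are assumptions, not values from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of step S1 (sparse-autoencoder branch only).
# Network sizes, the sparsity penalty, and the element-wise-max fusion
# rule are illustrative assumptions, not values taken from the patent.

class SparseAutoencoder(nn.Module):
    def __init__(self, in_dim=256, hid_dim=64, rho=0.05, beta=1e-3):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hid_dim)
        self.decoder = nn.Linear(hid_dim, in_dim)
        self.rho, self.beta = rho, beta   # target activation / penalty weight

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))          # learned features
        return h, torch.sigmoid(self.decoder(h))    # features, reconstruction

    def loss(self, x, h, x_hat):
        recon = F.mse_loss(x_hat, x)
        rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)  # mean activation per unit
        kl = (self.rho * torch.log(self.rho / rho_hat)
              + (1 - self.rho) * torch.log((1 - self.rho) / (1 - rho_hat))).sum()
        return recon + self.beta * kl               # reconstruction + sparsity


def fuse(model, src_a, src_b):
    """Feature-level fusion: encode both sources, combine features, decode."""
    with torch.no_grad():
        h_a, _ = model(src_a)
        h_b, _ = model(src_b)
        h_fused = torch.maximum(h_a, h_b)           # assumed fusion rule
        return torch.sigmoid(model.decoder(h_fused))


if __name__ == "__main__":
    model = SparseAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    patches = torch.rand(512, 256)                  # stand-in training patches
    for _ in range(100):                            # unsupervised training
        h, x_hat = model(patches)
        loss = model.loss(patches, h, x_hat)
        opt.zero_grad()
        loss.backward()
        opt.step()
    fused = fuse(model, torch.rand(8, 256), torch.rand(8, 256))
    print(fused.shape)                              # torch.Size([8, 256])
```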

[0021] S2. Study the convex optimization problem of the image fusion model, and initialize the networks using unsupervised pre-training from deep learning, so that the networks can quickly find an optimal solution during training; ...
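The available text does not detail the pre-training scheme referred to in step S2. A common realisation of "unsupervised pre-training in deep learning" is greedy layer-wise pre-training, in which each layer is first trained as a small autoencoder on the previous layer's activations and the resulting weights initialise the stacked network before fine-tuning; such an initialisation typically places the network near a good solution and speeds up convergence, which matches the effect claimed here. The sketch below shows that scheme under those assumptions; layer sizes and hyperparameters are illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of step S2: greedy layer-wise unsupervised pre-training.
# Each layer is trained to reconstruct its own input; the pre-trained weights
# then initialise the stacked network, which is expected to converge faster
# during subsequent fine-tuning on the fusion objective.

def pretrain_layer(layer, data, epochs=50, lr=1e-3):
    """Train one nn.Linear layer as a one-layer autoencoder (untied decoder)."""
    decoder = nn.Linear(layer.out_features, layer.in_features)
    opt = torch.optim.Adam(list(layer.parameters()) + list(decoder.parameters()), lr=lr)
    for _ in range(epochs):
        h = torch.sigmoid(layer(data))
        loss = F.mse_loss(decoder(h), data)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return torch.sigmoid(layer(data))           # activations for next layer


def pretrain_stack(sizes, data):
    """Pre-train a stack of layers greedily; return the initialised layers."""
    layers, x = [], data
    for in_dim, out_dim in zip(sizes[:-1], sizes[1:]):
        layer = nn.Linear(in_dim, out_dim)
        x = pretrain_layer(layer, x)                # one layer at a time
        layers.append(layer)
    return layers


if __name__ == "__main__":
    data = torch.rand(512, 256)                     # stand-in image features
    stack = pretrain_stack([256, 128, 64], data)
    # The pre-trained layers would now be wrapped in the full fusion network
    # and fine-tuned end-to-end.
    net = nn.Sequential(*[m for l in stack for m in (l, nn.Sigmoid())])
    print(net(data).shape)                          # torch.Size([512, 64])
```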


Abstract

The invention discloses a method for image fusion based on representation learning, which comprises the steps of: acquiring multi-source images; learning features of the multi-source images through a learning framework consisting of a deep neural network formed by sparse autoencoders, a deep belief network formed by Boltzmann machines, and a deep convolutional neural network; completing fusion of the multi-source images using the automatically learned features and establishing an image fusion model; studying the convex optimization problem of the image fusion model and initializing the networks with unsupervised pre-training from deep learning, so that the networks can quickly find an optimal solution during training; and establishing a deep learning network for cooperative training, according to the features of the multi-source images, from two or more deep learning networks, thereby realizing an image fusion technique based on representation learning. The disclosed method studies feature-level fusion of images using artificial intelligence and a deep-learning-based feature representation. Compared with traditional pixel-level fusion methods, it can better understand image information and thus further improves the quality of image fusion.
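The abstract's final step, cooperative training of two or more deep learning networks on the multi-source features, is not elaborated in the available text. One plausible reading, sketched below as an assumption rather than the patent's actual procedure, is to train one encoder/decoder network per source modality jointly, with a per-modality reconstruction loss plus an agreement term that pulls the two learned feature representations together so that the shared features can then be fused; the agreement penalty, network sizes, and the fusion-by-averaging at the end are all illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of cooperative training: two encoder/decoder networks,
# one per source modality, are trained jointly.  Each network reconstructs
# its own modality, and an agreement penalty encourages the two learned
# feature representations to match, so the shared features can be fused.
# All sizes and loss weights are illustrative assumptions.

class ModalityNet(nn.Module):
    def __init__(self, dim=256, feat=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, feat))
        self.dec = nn.Sequential(nn.Linear(feat, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x):
        h = self.enc(x)
        return h, self.dec(h)


def cotrain(net_a, net_b, x_a, x_b, steps=200, lam=0.1):
    opt = torch.optim.Adam(list(net_a.parameters()) + list(net_b.parameters()), lr=1e-3)
    for _ in range(steps):
        h_a, rec_a = net_a(x_a)
        h_b, rec_b = net_b(x_b)
        loss = (F.mse_loss(rec_a, x_a) + F.mse_loss(rec_b, x_b)
                + lam * F.mse_loss(h_a, h_b))       # agreement between features
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net_a, net_b


if __name__ == "__main__":
    x_a, x_b = torch.rand(256, 256), torch.rand(256, 256)    # paired patches
    net_a, net_b = cotrain(ModalityNet(), ModalityNet(), x_a, x_b)
    with torch.no_grad():
        fused_feat = 0.5 * (net_a.enc(x_a) + net_b.enc(x_b))  # averaged features
        print(net_a.dec(fused_feat).shape)          # torch.Size([256, 256])
```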

Description

Technical Field
[0001] The invention relates to the technical field of image fusion, and in particular to an image fusion method based on representation learning.
Background Art
[0002] With the advancement of society and technology, imaging detection bands have developed from a single visible/near-infrared band to multiple bands such as extreme ultraviolet, mid/far infrared, and terahertz, forming a variety of modern imaging detection and image acquisition technologies with rich spectral coverage. These technologies allow people to obtain a wide variety of image information easily. In many industries, people use various sensors to acquire large amounts of image information with different representations from different viewpoints, i.e., multi-modal image information. On the one hand, image information is acquired by different sensors, and the image information acquired by different sensors for the same object has different representations. On the other hand, even for image information of the...


Application Information

Patent type & authority: Application (China)
IPC(8): G06T7/00
CPC: G06T7/344; G06T2207/20221
Inventors: 陈占伟, 黄伟, 陈松岭, 张少辉, 陈立勇
Owner: ZHOUKOU NORMAL UNIV