Infrared and visible light image fusion method combining latent low-rank representation and convolutional neural network

The method combines latent low-rank representation technology with a convolutional neural network and falls under neural learning methods, biological neural network models, neural architectures, etc. It addresses problems such as the inability of existing methods to fully utilize image features, the difficulty of designing a suitable model, and the difficulty of implementing the method, while guaranteeing a rational and reversible decomposition, prominent targets, and rich detail and information in the fused result.

Active Publication Date: 2022-02-01
SICHUAN UNIV

AI Technical Summary

Problems solved by technology

[0004] In order to obtain better fusion performance, methods based on multi-scale transformation tend to adopt increasingly complex fusion strategies, which makes them difficult to implement, and the hand-designed rules cannot ensure that the features of the decomposed images are fully utilized. Methods based on neural networks require increasingly complex network structures and loss functions, which makes designing a model suitable for fusion very difficult.



Examples


Embodiment Construction

[0049] In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the accompanying drawings and embodiments. The invention provides a method for fusing infrared and visible light images that combines latent low-rank representation and a convolutional neural network, comprising the following steps:

[0050] Step 1) Select an infrared and visible light fusion data set and expand it to serve as the training set of the neural network.

[0051] The infrared and visible fusion dataset is TNO, available at https://figshare.com/articles/TNO_Image_Fusion_Dataset/1008029. Based on image quality and frequency of appearance in the literature, 28 pairs of infrared and visible light images were selected as the original images. Since the model of the present invention contains a neural network structure, a large amount of data is required for training, so data expansion i...
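
The text above breaks off before describing how the expansion is performed, so the following is only a sketch of one common way to enlarge a small fusion training set: cutting each registered infrared/visible pair into overlapping grayscale patches. The patch size, stride, and file layout are illustrative assumptions, not values taken from the patent.

```python
# Hypothetical data-expansion sketch (patch size and stride are assumed values).
import os
from PIL import Image

def expand_pair(ir_path, vis_path, out_dir, patch=128, stride=64):
    """Cut one registered infrared/visible pair into overlapping gray patches."""
    ir = Image.open(ir_path).convert("L")
    vis = Image.open(vis_path).convert("L")
    assert ir.size == vis.size, "source pairs must be registered and equal-sized"
    os.makedirs(out_dir, exist_ok=True)
    w, h = ir.size
    count = 0
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            box = (left, top, left + patch, top + patch)
            ir.crop(box).save(os.path.join(out_dir, f"ir_{count:05d}.png"))
            vis.crop(box).save(os.path.join(out_dir, f"vis_{count:05d}.png"))
            count += 1
    return count
```

Applied to each of the 28 source pairs, a cropping scheme like this turns the original handful of images into a much larger set of training pairs for the network.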



Abstract

The invention relates to the technical field of computer information and discloses an infrared and visible light image fusion method combining latent low-rank representation and a convolutional neural network. The method comprises the steps of preprocessing a visible light image and an infrared image, decomposing each image into a salient part and a low-rank part through latent low-rank representation, fusing the salient parts and the low-rank parts with two fully convolutional models, and finally adding the fused salient part and the fused low-rank part to obtain the final fused image. The fusion result is rich in detail and information; compared with the original low-illumination image, scene targets are clearer and image contrast is improved.
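
In latent low-rank representation (LatLRR), an observation matrix X is decomposed as X = XZ + LX + E, where XZ is the low-rank part, LX is the salient part recovered by the learned projection L, and E is sparse noise. The sketch below strings the abstract's pipeline together under that reading: it assumes a projection matrix obtained from an offline LatLRR solver and uses two placeholder fully convolutional networks for the salient-part and low-rank-part fusion; the actual network architectures, training losses, and preprocessing are not given in this excerpt.

```python
# Minimal pipeline sketch (PyTorch). `L_proj`, the network depth/width, and the
# channel-concatenation fusion input are assumptions, not the patented design.
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    """Small fully convolutional net mapping an (IR, VIS) component pair to one band."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, ir, vis):
        # Concatenate the two source components along the channel axis and fuse.
        return self.net(torch.cat([ir, vis], dim=1))

def latlrr_split(img, L_proj):
    """Split an (H, W) image into a salient part (L_proj @ img) and the remainder."""
    salient = L_proj @ img
    return salient, img - salient

def fuse(ir, vis, L_proj, salient_net, lowrank_net):
    """Fuse one registered infrared/visible pair, both given as (H, W) tensors."""
    ir_s, ir_l = latlrr_split(ir, L_proj)
    vis_s, vis_l = latlrr_split(vis, L_proj)
    to4d = lambda x: x.unsqueeze(0).unsqueeze(0)      # (H, W) -> (1, 1, H, W)
    fused_s = salient_net(to4d(ir_s), to4d(vis_s))    # fuse the salient parts
    fused_l = lowrank_net(to4d(ir_l), to4d(vis_l))    # fuse the low-rank parts
    return (fused_s + fused_l).squeeze()              # add the two fused parts back
```

For example, `salient_net = FusionCNN()` and `lowrank_net = FusionCNN()` would be trained separately on the salient and low-rank components of the expanded training pairs before being used here; adding the two fused components at the end mirrors the additive, reversible decomposition described in the abstract.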

Description

Technical field
[0001] The invention relates to the fields of computing, image fusion, and intelligent monitoring, in particular to an infrared and visible light image fusion method combining latent low-rank representation and a convolutional neural network.
Background technique
[0002] Image fusion extracts the most meaningful parts of images acquired by different sensors and combines them into a single image. The fused image contains more scene description information and is therefore more convenient for subsequent applications. Infrared and visible light image fusion has unique advantages. An infrared sensor captures the thermal radiation of objects, so an infrared image is not easily affected by a complex environment and can better distinguish objects from the background, but this also results in insufficient detail information and low contrast. A visible light sensor captures the reflected light of the environment, and the imaging is easily...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T5/50, G06T5/00, G06T3/40, G06N3/04, G06N3/08
CPC: G06T5/50, G06N3/084, G06T5/007, G06T3/4038, G06T2207/10048, G06T2207/20221, G06T2207/20081, G06N3/045, Y02T10/40
Inventor: 朱敏, 杨勇明, 章强, 高承睿, 程俊龙, 李长林
Owner: SICHUAN UNIV