
Image fusion method based on brightness self-adaption and significance detection

An image fusion method with brightness adaptation, applied in the field of image fusion. It addresses problems such as poor fusion quality and unsatisfactory image detail, and achieves the effect of improving the fusion result.

Active Publication Date: 2019-11-22
GUANGDONG UNIV OF TECH


Problems solved by technology

[0004] To overcome the shortcomings of the prior art, in which fusion of infrared and visible-light images under complex brightness conditions handles image features and image details unsatisfactorily and yields a poor fusion result, the present invention provides an image fusion method based on brightness self-adaptation and saliency detection, used to improve the fusion effect of images under complex brightness.



Examples


Embodiment 1

[0052] Based on the influence of brightness information on scene recognition, the present invention defines image brightness levels and classifies images accordingly, saving computing time when image fusion is not required. Based on the relationship between image characteristics and ambient brightness, a brightness weight function is designed to optimize the fusion of the base layer; at the same time, the saliency feature map of the image is used to preserve the overall contrast information of the image base layer, and the fusion of the image detail layer is optimized by the least-squares method.
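The brightness-level classification described above can be sketched as follows. The patent defines levels from the grayscale histogram but does not publish its thresholds in this summary, so the mean-intensity rule and the threshold values below are illustrative assumptions, not the patented scheme:

```python
import numpy as np

def brightness_level(gray, low_thr=85, high_thr=170):
    """Classify an 8-bit grayscale image as dark / normal / bright.

    Assumption: a simple mean-intensity rule with illustrative thresholds
    stands in for the patent's histogram-based brightness levels.
    """
    mean = gray.astype(np.float64).mean()
    if mean < low_thr:
        return "dark"
    if mean > high_thr:
        return "bright"
    return "normal"

# Example: a uniformly bright frame is classified as "bright"
frame = np.full((4, 4), 200, dtype=np.uint8)
print(brightness_level(frame))  # prints "bright"
```

In such a scheme, frames classified the same way as their predecessor could skip re-fusion entirely, which is one plausible reading of the "saves computing time when fusion is not required" claim.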

[0053] As shown in Figure 1, an image fusion method based on brightness self-adaptation and saliency detection includes the following steps:

[0054] S1: collect infrared images and visible light images and perform image preprocessing on each; the preprocessing includes image grayscale conversion, image enhancement, and filtering/denoising;
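The S1 preprocessing step can be sketched with plain NumPy. The patent does not specify which grayscale weights, enhancement, or denoising filter it uses; the BT.601 luminance weights and the mean filter below are common stand-ins, labeled as assumptions:

```python
import numpy as np

def to_gray(rgb):
    """Grayscale conversion using ITU-R BT.601 luminance weights.

    Assumption: the patent only says 'grayscale processing'; BT.601
    weights are one standard choice.
    """
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def box_denoise(img, k=3):
    """Simple k-by-k mean filter as a stand-in for the unspecified denoiser."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rgb = np.full((4, 4, 3), 255.0)     # a white visible-light frame
gray = to_gray(rgb)                 # -> all 255.0
smooth = box_denoise(gray)          # constant image is unchanged
```

In practice an edge-preserving filter (e.g. bilateral or guided filtering) would likely replace the mean filter, since later steps depend on clean base/detail separation.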

[0055]...



Abstract

The invention discloses an image fusion method based on brightness self-adaptation and saliency detection. The method comprises the following steps: collecting an infrared image and a visible light image, and respectively carrying out image preprocessing and image registration; determining a brightness level by using histograms of the grayed infrared and visible light images, counting image pixel saliency values and calculating a brightness weight; decomposing the images by rolling guidance filtering; defining pixel saliency values and combining them into a saliency map; fusing the base-layer images and the detail-layer images; and superposing the base-layer fusion image and the detail-layer fusion image to obtain the final fusion result. The method grades image brightness, decomposes the images to be fused by rolling guidance filtering, and processes the global targets and the details of the image separately: the brightness weight and the saliency map are used to fuse the base-layer image, the least-squares method is used to fuse the detail-layer image, and the image fusion effect is thereby improved.
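The base/detail pipeline in the abstract can be sketched as follows. The patent uses rolling guidance filtering for the decomposition and least squares for detail fusion; neither algorithm is disclosed in this summary, so the mean filter below stands in for the edge-preserving smoother, and a max-absolute-value rule stands in for the least-squares detail fusion. Both substitutions are assumptions made only to show the base/detail split and recombination:

```python
import numpy as np

def decompose(img, k=5):
    """Split an image into base (smoothed) and detail (residual) layers.

    Assumption: a k-by-k mean filter replaces the patent's rolling
    guidance filter purely for illustration.
    """
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    base = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            base += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    base /= k * k
    return base, img - base

def fuse(ir, vis, w_base=0.5):
    """Fuse infrared and visible images via separate base/detail rules.

    Assumption: w_base is a scalar stand-in for the patent's
    brightness-weight function, and the max-|detail| rule replaces
    its least-squares detail fusion.
    """
    b_ir, d_ir = decompose(ir)
    b_vis, d_vis = decompose(vis)
    base_f = w_base * b_ir + (1.0 - w_base) * b_vis          # weighted base fusion
    detail_f = np.where(np.abs(d_ir) >= np.abs(d_vis), d_ir, d_vis)  # keep stronger detail
    return base_f + detail_f                                  # superpose the two layers

# Example: two uniform inputs fuse to their weighted average
fused = fuse(np.full((8, 8), 100.0), np.full((8, 8), 50.0))  # -> all 75.0
```

The final superposition step matches the abstract directly: the fused base and fused detail layers are simply added to form the output image.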

Description

Technical Field

[0001] The present invention relates to the field of image fusion, and more specifically to an image fusion method based on brightness self-adaptation and saliency detection.

Background Technique

[0002] With the rise of artificial intelligence, visual image processing has a wide range of applications in robotics: it can perform scene recognition and positioning through matching and distance calculation, and the processed image data can then be used for robot control. However, due to the imaging principle of visible-light cameras, vision sensors often have certain application limitations: (1) they cannot adapt to complex environments, such as strongly exposed scenes or complex backgrounds, and are usually only used in simple, bright scenes; (2) traditional image processing cannot judge whether a change in the gray value of an image is due to a change of scene or to a change of lighting in the same scene, so it c...

Claims


Application Information

IPC(8): G06T7/33
CPC: G06T7/33; G06T2207/20221; G06T2207/10048
Inventor: 蔡佳, 曾碧
Owner GUANGDONG UNIV OF TECH