Infrared and visible light image fusion method based on salient objects

A technology for fusing infrared and visible light images, applied in image enhancement, image data processing, instruments, etc.

Active Publication Date: 2015-06-10
THE 28TH RES INST OF CHINA ELECTRONICS TECH GROUP CORP

AI Technical Summary

Problems solved by technology

Most existing systems and techniques are limited to simple superposition or trade-of...


Examples


Embodiment

[0159] The implementation process of the present invention is illustrated by a specific example.

[0160] Figure 2 is an infrared image of a scene, and Figure 3 is a visible light image of the same scene.

[0161] As described in step 1, the nonlinear scale space representations of the infrared image and the visible light image are first established, with the edge threshold λ set to 0.5.
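
For illustration, here is a minimal sketch of how such a nonlinear scale space can be built, assuming a Perona-Malik style anisotropic diffusion (the kind of nonlinear scale space used, for example, by KAZE) in which the edge threshold plays the role of λ. The function names, the time step, and the number of levels are illustrative choices, not taken from the patent.

```python
# Minimal sketch of step 1: a nonlinear (anisotropic-diffusion) scale space,
# assuming a Perona-Malik style conductance with edge threshold lam = 0.5.
import numpy as np

def diffuse_once(img, lam=0.5, step=0.15):
    """One explicit diffusion step; smoothing is suppressed across strong edges."""
    dn = np.roll(img, -1, axis=0) - img   # differences to the four neighbours
    ds = np.roll(img,  1, axis=0) - img
    de = np.roll(img, -1, axis=1) - img
    dw = np.roll(img,  1, axis=1) - img
    g = lambda d: np.exp(-(d / lam) ** 2)  # conductance: small across edges stronger than lam
    return img + step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

def nonlinear_scale_space(img, levels=4, iters_per_level=10, lam=0.5):
    """Return progressively diffused copies of img (coarser levels smooth more)."""
    space = [img.astype(np.float64)]
    for _ in range(levels - 1):
        cur = space[-1].copy()
        for _ in range(iters_per_level):
            cur = diffuse_once(cur, lam=lam)
        space.append(cur)
    return space
```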

[0162] As described in step 2, the brightness, color, and orientation visual feature maps of the infrared image and the visible light image are computed, the corresponding brightness, color, and orientation saliency maps are formed, and the visual attention saliency maps of the infrared image and the visible light image are calculated.
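
A simplified, Itti-style sketch of this step is given below: centre-surround contrast over the scale space provides the brightness channel, directional gradients stand in for the orientation channel, and the normalised channels are summed into a single visual-attention saliency map. The exact feature definitions and normalisation of the patent are not reproduced here; everything named below is an assumption.

```python
# Simplified sketch of step 2. For a single-band (infrared) image there is no real
# colour channel, so intensity contrast is reused as a stand-in.
import numpy as np
from scipy import ndimage

def normalize(m):
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else m

def center_surround(space, c=0, s=2):
    """|fine level - coarse level| of the nonlinear scale space, at full resolution."""
    return np.abs(space[c] - space[s])

def orientation_maps(img, angles=(0, 45, 90, 135)):
    """Crude orientation channels from directional gradients (a Gabor bank is more typical)."""
    gy, gx = np.gradient(ndimage.gaussian_filter(img, 2.0))
    return [np.abs(np.cos(np.deg2rad(a)) * gx + np.sin(np.deg2rad(a)) * gy)
            for a in angles]

def saliency_map(scale_space):
    intensity = normalize(center_surround(scale_space))
    orient = normalize(sum(normalize(m) for m in orientation_maps(scale_space[0])))
    color = intensity  # stand-in colour channel for single-band imagery
    return normalize(intensity + orient + color)
```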

[0163] According to step 3, 5 salient target regions are found in the infrared image and 4 in the visible light image, of which 3 salient target regions are present in both the infrared image and the visible light image, and the salient target ...
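
The kind of selection loop that produces such region counts can be sketched as inhibition of return over the saliency map: take the strongest peak, grow a region around it, record it, then suppress it before looking for the next. The thresholds and stopping rule below are illustrative assumptions, not the patent's.

```python
# Hedged sketch of salient-region selection with inhibition of return.
import numpy as np
from scipy import ndimage

def select_salient_regions(sal, region_thresh=0.5, stop_ratio=0.3, max_regions=10):
    sal = sal.astype(np.float64).copy()
    global_max = sal.max()
    regions = []                              # one boolean mask per salient region
    for _ in range(max_regions):
        peak = sal.max()
        if peak <= 0 or peak < stop_ratio * global_max:
            break
        mask = sal >= region_thresh * peak    # pixels comparable to the current peak
        labels, _ = ndimage.label(mask)
        peak_label = labels[np.unravel_index(sal.argmax(), sal.shape)]
        region = labels == peak_label         # connected component containing the peak
        regions.append(region)
        sal[region] = 0.0                     # inhibition of return: do not revisit
    return regions
```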



Abstract

The invention provides an infrared and visible light image fusion method based on salient objects. The method includes the following steps: building nonlinear scale space representations of an infrared image and a visible light image of a given scene, each containing a number of objects; using a visual attention computational model to compute visual attention saliency maps of the infrared image and the visible light image from their nonlinear scale space representations; using an inhibition-of-return mechanism to select salient object areas from the infrared image and the visible light image according to their visual attention saliency maps, and computing all salient object areas in the whole scene; after rectifying the infrared image and the visible light image, fusing the salient object areas with a pixel-level fusion algorithm and the non-salient object areas with a feature-level fusion algorithm; and synthesizing the results to generate a fused image of the infrared image and the visible light image for the whole scene.
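
Read as a pipeline, the abstract can be sketched end to end as follows, reusing the hypothetical helpers from the earlier sketches (nonlinear_scale_space, saliency_map, select_salient_regions). The per-pixel maximum inside salient regions and the local-variance weighting outside them are stand-ins of my own choosing; the abstract only states that a pixel-level algorithm is used inside salient regions and a feature-level algorithm outside them, and it presumes the two images are already registered.

```python
# End-to-end sketch of the fusion flow described in the abstract. Assumes ir and
# vis are registered, same-shape float arrays in [0, 1].
import numpy as np
from scipy import ndimage

def fuse(ir, vis):
    # Steps 1-3: scale spaces, saliency maps and salient target regions of both inputs
    sal_ir = saliency_map(nonlinear_scale_space(ir))
    sal_vis = saliency_map(nonlinear_scale_space(vis))
    salient = np.zeros(ir.shape, dtype=bool)
    for r in select_salient_regions(sal_ir) + select_salient_regions(sal_vis):
        salient |= r                          # all salient regions of the whole scene

    # Step 4a: pixel-level fusion inside salient regions (stand-in: per-pixel maximum)
    fused = np.where(salient, np.maximum(ir, vis), 0.0)

    # Step 4b: feature-level fusion elsewhere (stand-in: weight by local variance)
    var_ir = ndimage.generic_filter(ir, np.var, size=5)
    var_vis = ndimage.generic_filter(vis, np.var, size=5)
    w = var_ir / (var_ir + var_vis + 1e-9)
    return np.where(salient, fused, w * ir + (1.0 - w) * vis)
```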

Description

technical field

[0001] The invention relates to the technical field of multi-source image fusion, and in particular to a method for multi-level fusion of infrared and visible light images based on salient objects.

Background technique

[0002] Image fusion refers to the technology of comprehensively analysing multi-resolution or multi-modal image data, through spatial registration and the combination of complementary image information, to generate new images. Compared with single-sensor images, a fused image can make fuller use of the information in the source images, improving resolution and clarity, target perception sensitivity, perception distance and accuracy, and anti-interference ability, thereby reducing the incompleteness and uncertainty of target perception and improving target recognition accuracy and the ability to interpret the scene. The general process of image fusion is shown in the figure. First, perform some preprocessing operations on m...


Application Information

IPC(8): G06T5/50
Inventor: 邵静, 秦晅, 卢旻昊
Owner: THE 28TH RES INST OF CHINA ELECTRONICS TECH GROUP CORP