
A Fusion Method of Infrared and Visible Light Images Based on Salient Objects

An infrared and visible light image fusion technology, applied in image enhancement, image data processing, and instruments, achieving the effects of balancing fusion quality and efficiency, reducing the amount of data, and improving fusion efficiency

Active Publication Date: 2018-10-12
THE 28TH RES INST OF CHINA ELECTRONICS TECH GROUP CORP

AI Technical Summary

Problems solved by technology

Most existing systems are limited to simple superposition or trade-off operations on image pixels from different phases or different color channels of the same scene

Method used



Examples


Embodiment

[0161] The implementation process of the present invention is illustrated by a specific example.

[0162] Figure 2 is the infrared image of a certain scene, and Figure 3 is the visible light image of the same scene.

[0163] As described in step 1, first establish the nonlinear scale space representations of the infrared image and the visible light image, with the edge threshold λ set to 0.5.
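
The patent does not disclose which nonlinear diffusion scheme is used for step 1, so the following is only a minimal sketch that builds such a scale space with Perona-Malik style anisotropic diffusion, where `lam` plays the role of the edge threshold λ = 0.5 and pixel values are assumed normalized to [0, 1]; the level count, time step, and iteration count are illustrative assumptions.

```python
import numpy as np

def nonlinear_scale_space(img, n_levels=4, n_iter=10, lam=0.5, dt=0.2):
    """Build a nonlinear scale space via Perona-Malik anisotropic diffusion.

    `lam` plays the role of the edge threshold from paragraph [0163];
    pixel values are assumed normalised to [0, 1].  Level count, time step
    and iteration count are illustrative choices only.
    """
    u = img.astype(np.float64)
    levels = [u.copy()]
    for _ in range(n_levels - 1):
        for _ in range(n_iter):
            # differences to the four neighbours (borders wrap around,
            # which is acceptable for a sketch)
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            # edge-stopping conductivity: diffusion is suppressed where the
            # local gradient exceeds the edge threshold lam
            g = lambda d: np.exp(-(d / lam) ** 2)
            u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        levels.append(u.copy())
    return levels
```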

[0164] As described in step 2, compute the brightness, color, and direction visual feature maps of the infrared image and the visible light image, derive the corresponding brightness, color, and direction saliency maps, and from these compute the visual attention saliency map of each image.
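
As an illustration of step 2, the sketch below builds brightness, color, and direction (orientation) feature maps in the spirit of an Itti-Koch visual attention model and combines their centre-surround contrast into a single saliency map. The specific filters, scales, and weights of the patented model are not given, so everything here, including the use of OpenCV, is an assumption.

```python
import numpy as np
import cv2  # OpenCV is an assumption; the patent names no library

def feature_saliency(channel, scales=(2, 4, 8)):
    """Centre-surround contrast of one feature channel at several scales."""
    channel = channel.astype(np.float64)
    sal = np.zeros_like(channel)
    for sigma in scales:
        surround = cv2.GaussianBlur(channel, (0, 0), sigmaX=sigma)
        sal += np.abs(channel - surround)
    return sal / sal.max() if sal.max() > 0 else sal

def visual_attention_saliency(image):
    """Combine brightness, colour and direction saliency maps (step 2)."""
    img = image.astype(np.float64)
    if img.ndim == 2:                      # grayscale input, e.g. the IR image
        intensity = img
        color = np.zeros_like(img)
    else:                                  # BGR visible-light image
        b, g, r = cv2.split(img)
        intensity = (b + g + r) / 3.0
        color = np.abs(r - g) + np.abs((r + g) / 2.0 - b)  # crude opponency
    gx = cv2.Sobel(intensity, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(intensity, cv2.CV_64F, 0, 1)
    orientation = np.abs(gx) + np.abs(gy)  # stand-in for the direction maps
    return (feature_saliency(intensity)
            + feature_saliency(color)
            + feature_saliency(orientation)) / 3.0
```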

[0165] According to step 3, the number of salient target areas in the infrared image is calculated to be 5 and the number in the visible light image to be 4, of which 3 salient target areas are present in both the infrared image and the visible light image, and the salient target ...
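
The region counts in step 3 come from selecting salient object areas with an inhibition-of-return mechanism over the saliency maps. A minimal sketch of such a selection loop is shown below, assuming square regions of fixed radius and an illustrative stopping threshold; neither parameter is specified in the patent.

```python
import numpy as np

def select_salient_regions(saliency, max_regions=8, radius=40, stop_thresh=0.3):
    """Pick salient object regions with an inhibition-of-return loop.

    Repeatedly attend to the most salient location, record a square region
    around it, then suppress that neighbourhood so the next iteration shifts
    attention elsewhere.  Region shape, radius and stopping threshold are
    illustrative assumptions, not values from the patent.
    """
    sal = saliency.astype(np.float64).copy()
    if sal.max() > 0:
        sal /= sal.max()
    h, w = sal.shape
    regions = []
    for _ in range(max_regions):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        if sal[y, x] < stop_thresh:        # no sufficiently salient peak left
            break
        y0, y1 = max(0, y - radius), min(h, y + radius)
        x0, x1 = max(0, x - radius), min(w, x + radius)
        regions.append((x0, y0, x1, y1))   # (left, top, right, bottom)
        sal[y0:y1, x0:x1] = 0.0            # inhibition of return
    return regions
```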



Abstract

The invention provides an infrared and visible light image fusion method based on salient objects. The method includes the following steps: building a nonlinear scale space representation for an infrared image and a visible light image of a given scene, each containing a plurality of objects; using a visual attention computational model to compute visual attention saliency maps of the infrared image and the visible light image based on their nonlinear scale space representations; using an inhibition-of-return mechanism to select salient object areas from the infrared image and the visible light image based on their visual attention saliency maps, and computing all salient object areas in the whole scene; registering the infrared image and the visible light image, using a pixel-level fusion algorithm to fuse the salient object areas and a feature-level fusion algorithm to fuse the non-salient object areas; and synthesizing the results to generate a fused image of the infrared image and the visible light image of the whole scene.
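
Putting the abstract's steps together, the following sketch shows the overall region-wise fusion strategy. A per-pixel maximum stands in for the pixel-level fusion of salient object areas and a plain average stands in for the feature-level fusion of the non-salient background; the patent does not fix either algorithm, and `salient_mask` is assumed to be the union of the salient object areas found in both already-registered, single-channel images.

```python
import numpy as np

def fuse_images(ir, vis, salient_mask):
    """Region-wise fusion of registered infrared and visible-light images.

    `ir` and `vis` are assumed single-channel and the same size, and
    `salient_mask` a boolean map of the union of salient object areas
    detected in both images.  The max rule and the average are simple
    stand-ins for the pixel-level and feature-level fusion algorithms,
    which the patent does not specify.
    """
    ir = ir.astype(np.float64)
    vis = vis.astype(np.float64)
    fused = np.empty_like(ir)
    fused[salient_mask] = np.maximum(ir, vis)[salient_mask]    # pixel-level rule
    fused[~salient_mask] = 0.5 * (ir + vis)[~salient_mask]     # feature-level stand-in
    return fused
```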

Description

Technical Field

[0001] The invention relates to the technical field of multi-source image fusion, and in particular to a method for multi-level fusion of infrared and visible light images based on salient objects.

Background Technique

[0002] Image fusion refers to a comprehensive analysis technology that spatially registers multi-resolution or multi-sensor image data and combines their complementary information to generate new images. Compared with single-sensor images, fused images can make maximal use of the information in the various source images, improving resolution and clarity, target perception sensitivity, perception distance and accuracy, and anti-interference ability, thereby reducing the incompleteness and uncertainty of target perception and improving the accuracy of target recognition and the ability to interpret the scene. The general process of image fusion is shown in the figure. First, perform some preprocessing operations on m...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T5/50
Inventor: 邵静, 秦晅, 卢旻昊
Owner: THE 28TH RES INST OF CHINA ELECTRONICS TECH GROUP CORP