Infrared image and visible-light image fusion method based on saliency region segmentation

A region segmentation and image fusion technology in the field of image fusion. It addresses the problem that region segmentation is ineffective for scenes lacking obvious targets, and achieves more targeted fusion, improved fusion-image quality, and increased information content.

Inactive Publication Date: 2013-10-23
PEKING UNIV SHENZHEN GRADUATE SCHOOL

AI Technical Summary

Problems solved by technology

However, for scenes lacking targets, the effect of region segmentation is not obvious.




Detailed Description of the Embodiments

[0027] The present invention is described in detail below through embodiments and the accompanying drawings.

[0028] Figure 1 is a flow chart of the infrared and visible-light image fusion method based on salient region segmentation in this embodiment. Its specific implementation steps are as follows:

[0029] 1. Segment the infrared and visible-light images into regions.

[0030] Here, a saliency-detection-based method is used for region division. A salient area in an image usually has higher contrast in color or brightness than its surrounding area. Therefore, by calculating the contrast between an area and its surroundings, the salient areas of the image can be determined.

[0031] The contrast of each area in the image can be obtained by calculating the difference between the gray level of each pixel and the gray levels of the pixels in its surrounding area, as in formula (1):

[0032] c ( i ...
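The exact form of formula (1) is truncated in this excerpt. As a minimal sketch under that caveat, the snippet below computes a per-pixel contrast map as the squared difference between each gray level and the mean of its surrounding window, then thresholds the map to obtain a salient-region mask; the window size and threshold are illustrative assumptions, not values from the patent.

```python
# A minimal sketch of contrast-based saliency segmentation. Formula (1) is
# truncated in the source, so the squared difference from the local mean is
# used here as an assumed contrast measure; window and threshold are arbitrary.
import numpy as np
from scipy.ndimage import uniform_filter

def contrast_saliency(gray, window=15):
    """Per-pixel contrast: squared difference from the surrounding-area mean."""
    gray = gray.astype(np.float64)
    local_mean = uniform_filter(gray, size=window)  # mean gray level of the surrounding area
    contrast = (gray - local_mean) ** 2             # higher contrast -> more salient
    # Normalise to [0, 1] so a fixed threshold can be applied.
    return (contrast - contrast.min()) / (np.ptp(contrast) + 1e-12)

def salient_region_mask(gray, threshold=0.3):
    """Binary mask of the salient region obtained by thresholding the contrast map."""
    return contrast_saliency(gray) > threshold
```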



Abstract

The invention relates to an infrared image and visible-light image fusion method based on salient region segmentation. The method comprises the following steps: (1) region segmentation is conducted on an infrared image and a visible-light image through a saliency detection method; (2) joint region expression is conducted on the segmented images; (3) multi-scale, multi-direction contourlet decomposition is conducted on the infrared image and the visible-light image respectively; (4) the low-pass sub-band fusion coefficients and band-pass directional sub-band fusion coefficients are determined according to fusion rules corresponding to the different regions of the jointly expressed images; (5) coefficient reconstruction is carried out to obtain the fused infrared and visible-light image. With this method, the thermal radiation characteristics of the infrared image and the scene detail characteristics of the visible-light image are both retained in the fusion result, so the amount of information contained in the fused image is increased and its quality is improved.
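To make the five steps concrete, here is a skeletal sketch of the pipeline. It assumes a hypothetical salient_region_mask() function such as the contrast-based one sketched above; a PyWavelets decomposition stands in for the contourlet transform, which has no standard library implementation; and the per-region fusion rules are simplified placeholders, not the rules claimed in the patent.

```python
# Skeletal sketch of the five-step fusion pipeline described in the abstract.
# Wavelets stand in for the contourlet transform; fusion rules are placeholders.
import numpy as np
import pywt

def _resize_mask(mask, shape):
    # Nearest-neighbour resize of a region mask to a sub-band's shape.
    ys = np.linspace(0, mask.shape[0] - 1, shape[0]).astype(int)
    xs = np.linspace(0, mask.shape[1] - 1, shape[1]).astype(int)
    return mask[np.ix_(ys, xs)]

def fuse_ir_visible(ir, vis, saliency_mask, levels=3, wavelet="db2"):
    # Steps (1)-(2): segment each image and form a joint region expression
    # (here simply the union of the two salient-region masks).
    joint_mask = saliency_mask(ir) | saliency_mask(vis)

    # Step (3): multi-scale decomposition of both images (contourlet stand-in).
    ir_c = pywt.wavedec2(ir.astype(np.float64), wavelet, level=levels)
    vis_c = pywt.wavedec2(vis.astype(np.float64), wavelet, level=levels)

    # Step (4a): low-pass sub-band -- keep infrared coefficients inside the
    # salient (thermal-target) region, average the two elsewhere (placeholder rule).
    m = _resize_mask(joint_mask, ir_c[0].shape)
    fused = [np.where(m, ir_c[0], (ir_c[0] + vis_c[0]) / 2.0)]

    # Step (4b): band-pass sub-bands -- take the coefficient with the larger
    # magnitude, which tends to preserve the visible image's scene detail.
    for ir_band, vis_band in zip(ir_c[1:], vis_c[1:]):
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b)
            for a, b in zip(ir_band, vis_band)
        ))

    # Step (5): coefficient reconstruction gives the fused image.
    return pywt.waverec2(fused, wavelet)
```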

Description

Technical field

[0001] The invention relates to an infrared and visible-light image fusion method based on salient region segmentation, and belongs to the technical field of image fusion.

Background technique

[0002] Image fusion is information fusion that takes images as its research object. Its purpose is to fuse images of the same target or scene acquired by different sensors, or by the same sensor with different imaging modes, into a single image. The fused image comprehensively reflects the information of the original images and describes the target or scene more completely, making it more suitable for human visual perception or computer processing. Image fusion helps an imaging system extend its coverage in time and space, reduce system uncertainty, improve reliability, and enhance robustness.

[0003] According to the stage of fusion in the processing pipeline, and according to the degree of abst...


Application Information

IPC(8): G06T5/50
Inventor: 刘宏, 李泽辉, 丁润伟
Owner: PEKING UNIV SHENZHEN GRADUATE SCHOOL