Infrared and visible light video image fusion method based on Surfacelet conversion

A video image fusion technology applied in image communication, image enhancement, image data processing, etc. It addresses the problems that moving-object detection increases the difficulty of implementing video image fusion and that detection accuracy is easily affected by environmental factors such as lighting.

Publication status: Inactive
Publication date: 2010-10-27
Assignee: XIDIAN UNIV
Cites: 3 · Cited by: 27

AI Technical Summary

Problems solved by technology

This type of technology first uses moving-object detection to divide each frame of the video into a moving-target region and a background region, and then applies different fusion rules to the background and target regions of each frame to obtain the fused video, e.g., Z. H. Wang, Z. Qin, "A framework of region-based dynamic image fusion", Journal of Zhejiang University Science A, Vol. 8, No. 1, 2007, pp. 56-62. Such fusion techniques can effectively solve the temporal consistency and stability problems of the first class of video image fusion techniques, but they require moving-object detection as a preprocessing step. Moving-object detection is itself a relatively difficult problem in video image processing, and its detection accuracy is easily affected by environmental factors such as lighting, which to some extent increases the difficulty of implementing this type of video image fusion technology.


Detailed Description of Embodiments

[0032] The present invention will be described in further detail below in conjunction with the accompanying drawings.

[0033] Referring to Figure 1, the present invention comprises the following steps:

[0034] Step 1: Use the Surfacelet transform to decompose the input video images at multiple scales and in multiple directions.

[0035] Perform the Surfacelet transform on the infrared video image Vir(x, y, t) and the visible light video image Vvi(x, y, t), which have undergone strict spatial and temporal registration, to obtain their respective transform coefficients {Cir,S(x, y, t), Cir,s^(j,k)(x, y, t)} and {Cvi,S(x, y, t), Cvi,s^(j,k)(x, y, t)}, where Cir,S(x, y, t) and Cvi,S(x, y, t) represent the low-frequency subband coefficients of the infrared and visible light input video images at the coarsest scale S, respectively, and Cir,s^(j,k)(x, y, t) and Cvi,s^(j,k)(x, y, t) respectively represent the band-pass direction subband coefficients of the infrared and visible light video images at scale s (s = 1, 2, ..., S), direction (j, k) (where j = 1, 2, 3) and space-time position (x, y, t); S denotes the coarsest decomposition scale...
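Since the Surfacelet transform is not available in common Python libraries, the following minimal sketch illustrates only the structure of this decomposition step, using a separable 3-D wavelet decomposition from PyWavelets as a stand-in for the actual Surfacelet transform: each registered video volume is split into one low-frequency subband at the coarsest scale plus a set of band-pass detail subbands per scale. The helper name decompose_video, the wavelet choice, and the number of levels are illustrative assumptions, not part of the patent.

# Illustrative stand-in for Step 1 (multi-scale, multi-directional decomposition).
# NOTE: this uses a separable 3-D wavelet transform from PyWavelets, NOT the
# actual Surfacelet transform; it only mimics the shape of the output:
# one low-frequency subband plus band-pass detail subbands at every scale.
import numpy as np
import pywt

def decompose_video(video, levels=3, wavelet="db2"):
    """Decompose a (t, y, x) video volume into a coarse low-frequency subband
    and per-scale detail subbands (hypothetical helper, for illustration only)."""
    coeffs = pywt.wavedecn(video, wavelet=wavelet, level=levels, axes=(0, 1, 2))
    low = coeffs[0]        # low-frequency subband at the coarsest scale S
    bandpass = coeffs[1:]  # list (coarse to fine) of dicts of detail subbands
    return low, bandpass

if __name__ == "__main__":
    # Strictly co-registered infrared and visible video volumes, shape (t, y, x).
    v_ir = np.random.rand(32, 64, 64).astype(np.float32)
    v_vi = np.random.rand(32, 64, 64).astype(np.float32)
    low_ir, band_ir = decompose_video(v_ir)
    low_vi, band_vi = decompose_video(v_vi)
    print(low_ir.shape, len(band_ir))  # coarse subband shape and number of scales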



Abstract

The invention discloses an infrared and visible light video image fusion method based on the Surfacelet transform, which mainly solves the problems of poor temporal consistency and stability of fused video images in the prior art. The method comprises the following steps: first, the Surfacelet transform is used to perform multi-scale, multi-directional decomposition of the input video images, yielding subband coefficients in different frequency bands; then, the low-frequency subband coefficients of the input video images are combined using a fusion rule that combines selection and weighted averaging based on three-dimensional local spatio-temporal energy matching, and the band-pass direction subband coefficients are combined using a fusion rule based on three-dimensional local spatio-temporal energy and the standard deviation of the direction vector, giving the low-frequency and band-pass direction subband coefficients of the fused video image; finally, the inverse Surfacelet transform is applied to the combined subband coefficients to obtain the fused video image. The invention has the advantages of good fusion quality, high temporal consistency and stability, and low noise sensitivity, and can be used for on-site security monitoring.
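As a hedged sketch of the low-frequency fusion rule named above (selection combined with weighted averaging driven by three-dimensional local spatio-temporal energy matching), the code below computes the local energy of the two low-frequency subbands over a small 3-D spatio-temporal window, derives an energy-matching degree, and then either selects the higher-energy coefficient or forms a weighted average, depending on a matching threshold. The window size, the threshold, and the function name fuse_lowpass are illustrative assumptions; the patent's exact formulas (and the direction-vector standard deviation rule used for the band-pass subbands) are not reproduced here.

# Hedged sketch of a selection / weighted-average fusion rule driven by
# 3-D local spatio-temporal energy matching (window size and threshold are
# illustrative assumptions, not the patent's exact parameters).
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_lowpass(low_ir, low_vi, window=(3, 3, 3), threshold=0.75):
    """Fuse two low-frequency video subbands of identical shape (t, y, x)."""
    # Local energy of each subband over a 3-D spatio-temporal neighbourhood.
    e_ir = uniform_filter(low_ir ** 2, size=window)
    e_vi = uniform_filter(low_vi ** 2, size=window)

    # Local energy-matching degree in [0, 1]: values near 1 mean the two
    # subbands carry similar information in this neighbourhood.
    cross = uniform_filter(low_ir * low_vi, size=window)
    match = 2.0 * np.abs(cross) / (e_ir + e_vi + 1e-12)

    # Weighted-average branch: the higher-energy source gets weight w_max,
    # which falls from 1 (at the threshold) towards 0.5 (perfect match).
    w_max = np.clip(0.5 + 0.5 * (1.0 - match) / (1.0 - threshold + 1e-12), 0.5, 1.0)
    ir_stronger = e_ir >= e_vi
    weighted = np.where(ir_stronger,
                        w_max * low_ir + (1.0 - w_max) * low_vi,
                        w_max * low_vi + (1.0 - w_max) * low_ir)

    # Selection branch: keep the higher-energy coefficient outright.
    selected = np.where(ir_stronger, low_ir, low_vi)

    # Select where the sources disagree, average where they agree.
    return np.where(match >= threshold, weighted, selected)

In the full method an analogous rule, driven by local energy together with the standard deviation of the direction vector, governs the band-pass direction subbands, after which the inverse Surfacelet transform reconstructs the fused video.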

Description

Technical Field

[0001] The invention relates to the field of video image processing, and in particular to a video image fusion method that can effectively solve the temporal consistency and stability problems arising in video image fusion; it can be used to fuse infrared and visible light video images.

Background Technique

[0002] A visible light imaging sensor forms images mainly according to the spectral reflectance characteristics of objects, while an infrared imaging sensor forms images mainly according to their thermal radiation characteristics. Usually, a visible light image describes the environmental information in a scene well, while an infrared image gives the existence and position characteristics of targets well. Fusing these two kinds of images can organically combine the target characteristics of the infrared image with the background information of the visible light image, thereby further improving the ability...


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N5/262; G06T5/00
Inventors: 张强 (Zhang Qiang), 王龙 (Wang Long), 马兆坤 (Ma Zhaokun), 李慧娟 (Li Huijuan)
Owner: XIDIAN UNIV