Image fusion of multidirectional sum-modified-Laplacian and tetrolet transform

Shen Yu, Chen Xiaopeng, Yang Qian (School of Electronics and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China)

Abstract
Objective Most infrared and visible image fusion algorithms can achieve scene cognition, but they cannot render the detailed features of the scene finely enough. To further improve scene identification, a multiscale geometric image fusion algorithm based on the tetrolet transform is proposed. Method First, the infrared and visible images are mapped into the tetrolet transform domain and decomposed into low-frequency and high-frequency coefficients. For the low-frequency coefficients, regional energy theory is combined with the traditional weighting method: exploiting the variability of regional energy and the correlation of regional pixels, the weighting coefficients are selected adaptively for fusion. For the high-frequency coefficients, the sum-modified-Laplacian is computed with an improved multidirectional Laplacian operator, and regional smoothness is introduced as a threshold to set the high-frequency fusion rule. Finally, the fused low-frequency and high-frequency coefficients are used to reconstruct the fused image. Result On three groups of infrared and visible images (kaptein, street, and road), the fusion results are compared with those of the contourlet transformation (CL), discrete wavelet transformation (DWT), and nonsubsampled contourlet transformation (NSCT) methods. In subjective evaluation, the proposed algorithm outperforms the other three methods in the rendition of background, targets, and details. In objective indicators, the running time is 0.37 s shorter than that of the NSCT method; the average gradient (AvG) and spatial frequency (SF) values are greatly improved, by up to 5.42 and 2.75, respectively; and the peak signal to noise ratio (PSNR), information entropy (IE), and structural similarity index (SSIM) values increase by 0.25, 0.12, and 0.19, respectively. Conclusion The proposed infrared and visible image fusion algorithm improves the depiction of details in the fused image and enhances the observer's understanding of the scene.
Image fusion of multidirectional sum modified Laplacian and tetrolet transform

Shen Yu, Chen Xiaopeng, Yang Qian (School of Electronics and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China)

Abstract
Objective Image fusion is an important form of information fusion and is widely used in image understanding and computer vision. It combines multiple images that describe the same scene in different forms to obtain accurate and comprehensive information. The fused image can, to some extent, provide effective information for subsequent image processing. Infrared and visible image fusion is a hot issue in this field. By combining the background information in the visible image with the target features in the infrared image, the information of the two images can be fully fused; the result describes the scene comprehensively and accurately, improves the recognition of target features and background, and enhances people's perception and understanding of the image. General infrared and visible image fusion algorithms can achieve the purpose of cognitive scenes but cannot render the detailed features of the scene finely enough to further improve scene identification and provide effective information for subsequent processing. Aiming at such problems, this study proposes a tetrolet-based multiscale geometric transformation fusion algorithm to remedy the shortcomings of existing algorithms. The tetrolet transform divides the source image into several image blocks and transforms each block to obtain low-frequency and high-frequency coefficients. The low-frequency and high-frequency coefficients of all image blocks are then arranged and integrated into an image matrix to obtain the low-frequency and high-frequency coefficients of the source image. Method First, the infrared and visible images are mapped to the tetrolet transform domain, that is, each image is subjected to the tetrolet transformation. According to tetromino tiling theory, the best tiling of each block is selected from the 117 admissible tilings by the criterion of the maximum first-order (l1) norm.
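The block-wise tiling selection just described can be sketched as follows. This is a minimal illustration, not the paper's implementation: only three of the 117 admissible tilings of a 4×4 block are hard-coded, the 4-point Haar-like transform is the standard one used by the tetrolet transform, and the maximal l1-norm selection follows the wording of the abstract.

```python
import numpy as np

# 2D Haar-like analysis matrix applied to the 4 pixels of one tetromino:
# one low-pass coefficient (first row) and three high-pass coefficients.
HAAR = 0.5 * np.array([[1,  1,  1,  1],
                       [1,  1, -1, -1],
                       [1, -1,  1, -1],
                       [1, -1, -1,  1]], dtype=float)

# Three illustrative tilings of a 4x4 block into four tetrominoes
# (the full transform considers all 117 admissible tilings).
# Each tetromino is a list of four (row, col) pixel positions.
TILINGS = [
    # four 2x2 square tetrominoes
    [[(0, 0), (0, 1), (1, 0), (1, 1)], [(0, 2), (0, 3), (1, 2), (1, 3)],
     [(2, 0), (2, 1), (3, 0), (3, 1)], [(2, 2), (2, 3), (3, 2), (3, 3)]],
    # four horizontal I-tetrominoes
    [[(r, 0), (r, 1), (r, 2), (r, 3)] for r in range(4)],
    # four vertical I-tetrominoes
    [[(0, c), (1, c), (2, c), (3, c)] for c in range(4)],
]

def tetrolet_block(block):
    """Transform one 4x4 block under each candidate tiling and keep the
    tiling with the maximal first-order (l1) norm, as the abstract states."""
    best = None
    for tiling in TILINGS:
        lows, highs = [], []
        for tet in tiling:
            pixels = np.array([block[r, c] for r, c in tet])
            coeffs = HAAR @ pixels
            lows.append(coeffs[0])      # one low-pass value per tetromino
            highs.extend(coeffs[1:])    # three high-pass values per tetromino
        norm = np.abs(highs).sum()      # first-order (l1) norm of the details
        if best is None or norm > best[0]:
            best = (norm, np.array(lows), np.array(highs))
    return best[1], best[2]
```

Arranging the four low-pass and twelve high-pass values of every block back into image matrices then yields the low-frequency and high-frequency coefficients of the source image, as described above.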
In this way, the respective low-frequency and high-frequency coefficients of the infrared and visible images are calculated. For the low-frequency coefficients, regional energy theory is combined with the traditional weighting method. By exploiting the variability of regional energy and the correlation of regional pixels, the weighting coefficients are adaptively selected, following the changing central pixel, to obtain the fused low-frequency coefficients. For the high-frequency coefficients, the traditional sum-modified-Laplacian is computed from Laplacian operators in only four directions: up, down, left, and right. Considering that the pixels in the diagonal directions also contribute to the sum-modified-Laplacian, this study uses an improved eight-direction Laplacian operator to compute it and introduces the regional smoothness as a threshold. If the sum-modified-Laplacian is above the threshold, a weighting coefficient is calculated from the smoothness and the threshold to carry out weighted fusion; otherwise, the fusion rule is set according to the maximum and minimum of the sum-modified-Laplacians of the two high-frequency components. This yields the fused high-frequency coefficients. Finally, the fused low-frequency and high-frequency coefficients are used to reconstruct the fused image. Result The fusion results on three sets of infrared and visible images are compared with those of the contourlet transformation (CL), discrete wavelet transformation (DWT), and nonsubsampled contourlet transformation (NSCT) methods. In terms of visual effect, the fused image of the proposed algorithm is superior to the other three methods in image background, scene objects, and detail rendition.
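The eight-direction sum-modified-Laplacian and the threshold rule can be sketched as follows. This is a simplified stand-in, not the paper's exact formulation: the equal diagonal weighting, the window size, the periodic boundary handling via np.roll, and the ratio-based weighting used above the threshold are all assumptions, since the abstract does not specify them.

```python
import numpy as np

# Each offset pair covers two opposite neighbours, so the four offsets
# together span eight directions: axial (up/down, left/right) and diagonal.
OFFSETS = [(0, 1), (1, 0), (1, 1), (1, -1)]

def sum_modified_laplacian(band, radius=1):
    """Eight-direction modified Laplacian, summed over a (2r+1)^2 window."""
    ml = np.zeros_like(band, dtype=float)
    for dy, dx in OFFSETS:
        shifted_p = np.roll(band, (dy, dx), axis=(0, 1))
        shifted_m = np.roll(band, (-dy, -dx), axis=(0, 1))
        ml += np.abs(2.0 * band - shifted_p - shifted_m)
    # box-sum of the modified Laplacian over the local window (SML)
    sml = np.zeros_like(ml)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sml += np.roll(ml, (dy, dx), axis=(0, 1))
    return sml

def fuse_high(h_ir, h_vis, threshold):
    """Fuse two high-frequency bands: weighted blend where the SML exceeds
    the threshold, choose-max SML elsewhere (simplified sketch)."""
    s_ir = sum_modified_laplacian(h_ir)
    s_vis = sum_modified_laplacian(h_vis)
    w = s_ir / (s_ir + s_vis + 1e-12)          # adaptive weight in [0, 1]
    blended = w * h_ir + (1.0 - w) * h_vis
    chosen = np.where(s_ir >= s_vis, h_ir, h_vis)
    use_blend = np.maximum(s_ir, s_vis) > threshold
    return np.where(use_blend, blended, chosen)
```

When the two bands are locally similar in sharpness, the blend preserves both; where one band clearly dominates, the choose-max branch keeps its coefficient intact.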
In terms of objective indicators, the running time of the proposed algorithm is 0.37 s shorter than that of the NSCT method. The average gradient (AvG) and spatial frequency (SF) values of the fused images are greatly improved, with maximum increases of 5.42 and 2.75, respectively, while the peak signal to noise ratio (PSNR), information entropy (IE), and structural similarity index (SSIM) values increase slightly, by 0.25, 0.12, and 0.19, respectively. The experimental results show that the proposed algorithm improves the effect and quality of the fused image to a certain extent. Conclusion This work proposes an infrared and visible image fusion method based on regional energy and an improved multidirectional sum-modified-Laplacian. The infrared and visible images are mapped into the transform domain by the tetrolet transformation, where each is decomposed into low-frequency and high-frequency coefficients. The low-frequency coefficients are fused according to regional energy theory and an adaptive weighted fusion criterion. The high-frequency coefficients of the infrared and visible images are selected according to the improved sum-modified-Laplacian and the regional smoothness. The fused image is then obtained from the fused low-frequency and high-frequency coefficients by the inverse transformation. Compared with the fusion results of the other three transform-domain algorithms, the fused images not only enhance the background information but also remarkably improve the rendition of details in the scene, and they show certain advantages in objective evaluation indexes such as average gradient and peak signal-to-noise ratio. The observer's ability to understand the scene is thereby improved.
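The two sharpness indicators that show the largest gains, average gradient (AvG) and spatial frequency (SF), can be computed as below. These are the standard textbook definitions of the two metrics; the paper is assumed to use the same forms, since the abstract does not give formulas.

```python
import numpy as np

def average_gradient(img):
    """Average gradient (AvG): mean magnitude of the horizontal and
    vertical finite differences; larger values indicate richer detail."""
    img = img.astype(float)
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def spatial_frequency(img):
    """Spatial frequency (SF): root of the summed squared row frequency
    (RF) and column frequency (CF)."""
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)
```

A flat image scores zero on both metrics, while an image with strong local intensity changes scores high, which is why gains in AvG and SF indicate a more detailed fusion result.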