Infrared and visible image fusion based on anisotropic guided filtering

Liu Mingwei1, Wang Renhua1, Li Jing1, Jiao Yingzhen2 (1. Department of Information and Cyber Security, People's Public Security University of China, Beijing 100038, China; 2. School of Mathematics and Statistics, Central China Normal University, Wuhan 430079, China)

Abstract
Objective To address the loss of edge detail information and the halo artifacts that commonly arise in infrared and visible image fusion, and to fully capture the important features of multi-source images, this paper combines anisotropic guided filtering with phase congruency and proposes an infrared and visible image fusion algorithm. Method First, anisotropic guided filtering decomposes each source image into a base layer containing large-scale variations and a series of detail layers containing small-scale details. Second, saliency maps are computed with phase congruency and Gaussian filtering, and initial binary weight maps are obtained by comparing pixel saliency; the weight maps are then optimized with anisotropic guided filtering to remove noise and suppress halo artifacts. Finally, the fusion result is obtained by image reconstruction. Result The proposed method is compared, both subjectively and objectively, with four classical infrared and visible fusion methods on the public TNO dataset: the convolutional neural network (CNN) method, the dual-tree complex wavelet transform (DTCWT) method, guided filtering fusion (GFF), and anisotropic diffusion fusion (ADF). In the subjective analysis, the proposed algorithm outperforms the other four methods in edge detail, background preservation, and target completeness. In the objective analysis, four image quality metrics are used for a comprehensive evaluation: mutual information (MI), degree of edge information (QAB/F), entropy (EN), and gradient-based feature mutual information (FMI_gradient). Compared with the other four methods, the proposed algorithm improves all four metrics by a clear margin: the average MI is 21.67% higher than that of GFF, the average QAB/F is 20.21% higher than that of CNN, the average EN is 5.69% higher than that of CNN, and the average FMI_gradient is 3.14% higher than that of GFF. Conclusion The proposed fusion algorithm based on anisotropic guided filtering resolves the detail "halo" problem of the original guided filter and effectively suppresses artifacts in the fusion result; with its scale-aware property, it better preserves the edge details and background information of the source images and improves the accuracy of the fusion result.
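The decomposition step can be sketched in a few lines of Python. The sketch below uses the classic guided filter of He et al. as a stand-in for the paper's anisotropic variant (its neighborhood-variance weighting is not reproduced here); the `scales` and `eps` values are illustrative, not parameters from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius, eps):
    """Classic guided filter (He et al.), used here as a stand-in for the
    anisotropic variant described in the paper."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    var_g = uniform_filter(guide * guide, size) - mean_g ** 2
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    a = cov_gs / (var_g + eps)  # per-pixel linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def decompose(img, scales=(2, 4, 8), eps=0.1 ** 2):
    """Split an image into one base layer (large-scale variations) and a
    series of detail layers (small-scale details), one per filter radius."""
    current = img.astype(np.float64)
    details = []
    for radius in scales:
        smoothed = guided_filter(current, current, radius, eps)
        details.append(current - smoothed)  # detail captured at this scale
        current = smoothed
    return current, details  # base layer, list of detail layers
```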
Keywords
Infrared and visible image fusion with multi-scale anisotropic guided filtering

Liu Mingwei1, Wang Renhua1, Li Jing1, Jiao Yingzhen2 (1. Department of Information and Cyber Security, People's Public Security University of China, Beijing 100038, China; 2. School of Mathematics and Statistics, Central China Normal University, Wuhan 430079, China)

Abstract
Objective Infrared (IR) images are based on the thermal radiation of the scene and are not susceptible to illumination and weather conditions. However, IR images are insensitive to changes in scene brightness, usually have poor image quality, and lack detailed scene information. By contrast, visible (VIS) images are sensitive to the optical information of the scene and contain a large amount of texture detail, but in low-light and nighttime conditions they cannot capture the target clearly. A fused image can therefore carry the complementary and redundant information that IR and VIS images provide about a scene, which makes image fusion an important technique for image processing and computer vision applications such as feature extraction and target recognition. Multi-scale decomposition (MSD), one of the most widely used image fusion approaches, has the advantage of extracting features at different scales. However, many traditional multi-scale transform methods ignore the differing image characteristics of IR and VIS images, so traditional IR and VIS fusion methods often lose edge detail information and fail to suppress halos. In this study, an IR and VIS image fusion algorithm based on the anisotropic guided filter and phase congruency (PC) is proposed, which preserves edge details and suppresses artifacts effectively. Method By combining the advantages of an edge-preserving filter and PC, the proposed scheme not only preserves the details of the source IR and VIS images but also effectively suppresses halos and artifacts. First, the input images are decomposed by anisotropic guided filtering into a base layer, which contains large-scale variations in intensity, and a series of detail layers, which capture texture details. Second, the saliency maps of the source images are calculated with PC and a Gaussian filter, and the resulting binary weight maps are optimized by anisotropic guided filters of different scales, which reduces noise and suppresses halos. Finally, the fusion result is reconstructed from the base and detail layers according to the reconstruction rules. The main contributions of the proposed algorithm are as follows: 1) An edge-preserving filter based on MSD is employed to extract image features at different scales. The anisotropic guided filtering weights are optimized based on local neighborhood variances to achieve strongly anisotropic filtering. This operation can not only extract the image's texture details and preserve its edge features but also prevent the halo phenomenon at edges. 2) A novel weight optimization based on spatial consistency is proposed, which reduces noise and smooths the weight maps. Anisotropic guided filtering is used to optimize the weight maps of each layer, which are obtained by multi-scale edge-preserving decomposition. Compared with the original guided filter, the anisotropic guided filter addresses the detail halos and the inconsistent handling of structures found in previous variants of the guided filter. The experimental results show that the proposed scheme can not only make detail information more prominent but also suppress artifacts effectively. 3) A PC operator is used instead of the Laplace operator to generate the saliency maps from the source images, because the PC operator is insensitive to variations in contrast and brightness.
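As a rough illustration of steps 2 and 3, the sketch below (reusing `guided_filter` and `decompose` from the earlier sketch) compares per-pixel saliency to form a binary weight map, refines it by guided filtering so that weight edges align with image edges, and reconstructs the fused image. A Gaussian high-pass magnitude stands in for the PC operator, whose full implementation is beyond an abstract-level sketch, and a single weight map guided by the IR image keeps the code short, whereas the paper refines per-source weight maps at each scale; all parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
# Reuses guided_filter() and decompose() from the previous sketch.

def saliency(img, sigma=5.0):
    """Stand-in saliency: the paper combines a phase congruency operator with
    Gaussian filtering; a Gaussian high-pass magnitude is used here only to
    show the shape of the pipeline."""
    high_pass = np.abs(img - gaussian_filter(img, 1.0))
    return gaussian_filter(high_pass, sigma)

def fuse(ir, vis, scales=(2, 4, 8), eps=0.1 ** 2):
    ir = ir.astype(np.float64)
    vis = vis.astype(np.float64)
    base_ir, det_ir = decompose(ir, scales, eps)
    base_vis, det_vis = decompose(vis, scales, eps)
    # Initial binary weights: 1 where the IR pixel is the more salient one.
    w = (saliency(ir) >= saliency(vis)).astype(np.float64)
    fused = np.zeros_like(ir)
    for d_ir, d_vis, radius in zip(det_ir, det_vis, scales):
        # Guided filtering of the weight map aligns its edges with the source
        # image, removing noise and suppressing halo artifacts.
        w_r = np.clip(guided_filter(ir, w, radius, eps), 0.0, 1.0)
        fused += w_r * d_ir + (1.0 - w_r) * d_vis
    # Base layers are combined with a weight refined at the coarsest scale.
    w_base = np.clip(guided_filter(ir, w, max(scales), eps), 0.0, 1.0)
    return fused + w_base * base_ir + (1.0 - w_base) * base_vis
```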
Result We test our method on the TNO image fusion dataset, which contains different military-relevant scenarios registered with different multiband camera systems (including Athena, DHV, FEL, and TRICLOBS). Fifteen typical image pairs are chosen to assess the performance of the proposed method against four representative fusion methods: the convolutional neural network (CNN)-based method, the dual-tree complex wavelet transform (DTCWT)-based method, the weighted-average fusion algorithm based on guided filtering (GFF), and the anisotropic diffusion-based fusion method (ADF). CNN is a representative of deep learning (DL)-based methods, DTCWT of wavelet-based methods, and GFF and ADF of edge-preserving filter-based methods. The experimental results demonstrate that the proposed method can effectively extract the target feature information and preserve the background information of the source images. The subjective evaluation shows that the proposed method is superior to the other four methods in detail, background, and object representation. The proposed method shows clear advantages not only in subjective evaluation but also in the objective evaluation metrics. For objective evaluation, four indices are selected: mutual information (MI), degree of edge information (QAB/F), entropy (EN), and gradient-based feature mutual information (FMI_gradient). The objective evaluation results show that the proposed algorithm has clear advantages on all four metrics. It has the largest MI and QAB/F values among the five algorithms, which means that it extracts much more edge and detail information than the other methods, and it also performs best on EN and FMI_gradient. Conclusion We propose a new IR and VIS image fusion scheme that combines multi-scale edge-preserving decomposition with anisotropic guided filtering. The multi-scale edge-preserving decomposition effectively extracts the meaningful information from the source images, and the anisotropic guided filtering eliminates artifacts and detail "halos". In addition, to improve the performance of our method, we employ the PC operator to obtain the saliency maps. The proposed fusion algorithm effectively suppresses halos in the fused results and better retains the edge details and background information of the source images. The experimental results show that, compared with the other algorithms, the proposed algorithm is more effective in preserving the details present in VIS images and highlighting the target information present in IR images. The proposed method could be further improved by combining it with DL-based methods.
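For reference, the two metrics most directly computable from pixel histograms, EN and MI, can be sketched as follows. Fusion MI is conventionally reported as the sum of MI(fused, IR) and MI(fused, VIS); 8-bit gray-level images are assumed.

```python
import numpy as np

def entropy(img, bins=256):
    """EN: Shannon entropy of the image's gray-level histogram (bits)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
    p = hist[hist > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, bins=256):
    """MI between two images from their joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=[[0, 256], [0, 256]])
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)  # marginal of image a
    p_b = p_ab.sum(axis=0, keepdims=True)  # marginal of image b
    nz = p_ab > 0
    return np.sum(p_ab[nz] * np.log2(p_ab[nz] / (p_a * p_b)[nz]))

# Conventional fusion MI: contributions from both source images.
# mi_total = mutual_information(fused, ir) + mutual_information(fused, vis)
```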
Keywords
