Infrared and visible image fusion with multi-scale anisotropic guided filtering
2021, Vol. 26, No. 10: 2421-2432
Received: 2020-07-16; Revised: 2020-08-10; Accepted: 2020-08-17; Published in print: 2021-10-16
DOI: 10.11834/jig.200339

Objective
Fusion of infrared and visible images is prone to losing edge detail information and producing halo artifacts in the fused result. To address these problems, and to fully capture the important features of multi-source images, an infrared and visible image fusion algorithm is proposed that combines anisotropic guided filtering with phase congruency.
Method
First, anisotropic guided filtering is applied to the source images to obtain a base layer containing large-scale variations and a series of detail layers containing small-scale details. Second, saliency maps are computed using phase congruency and Gaussian filtering, initial binary weight maps are obtained by comparing pixel saliency, and the weight maps are then refined with anisotropic guided filtering to remove noise and suppress halo artifacts. Finally, the fusion result is obtained by image reconstruction.
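The decomposition stage can be pictured with a short sketch. The paper's filter is the anisotropic guided filter of Ochotorena and Yamashita (2020), which is not available in common libraries, so this minimal Python sketch substitutes the classic guided filter of He et al. (2013) applied with self-guidance; the radii and eps values are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r, eps):
    """He et al. (2013) guided filter: guidance I, input p, window radius r."""
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    var_I = uniform_filter(I * I, size) - mean_I * mean_I
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)                # per-window linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def multiscale_decompose(img, radii=(2, 4, 8), eps=1e-2):
    """Split img into one base layer and len(radii) detail layers."""
    details, current = [], img.astype(np.float64)
    for r in radii:                           # fine-to-coarse smoothing
        smoothed = guided_filter(current, current, r, eps)
        details.append(current - smoothed)    # detail captured at this scale
        current = smoothed
    return current, details                   # base layer, detail layers
```

Each pass smooths at a growing radius, so the earlier detail layers hold the finest-scale structures and the final residual is the base layer.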
Result
The proposed method is compared, both subjectively and objectively, with four classical infrared and visible fusion methods on the public TNO dataset: the convolutional neural network (CNN)-based method, the dual-tree complex wavelet transform (DTCWT)-based method, guided filtering fusion (GFF), and anisotropic diffusion fusion (ADF). In the subjective analysis, the results of the proposed algorithm are superior to those of the other four methods in edge detail, background preservation, and target completeness. In the objective analysis, four image quality indices are used for comprehensive evaluation: mutual information (MI), degree of edge information preservation (Q^AB/F), entropy (EN), and gradient-based feature mutual information (FMI_gradient). Compared with the other four methods, the proposed algorithm improves on every index: its average MI is 21.67% higher than that of GFF, its average Q^AB/F is 20.21% higher than that of CNN, its average EN is 5.69% higher than that of CNN, and its average FMI_gradient is 3.14% higher than that of GFF.
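For reference, the two information-theoretic indices above can be computed from grey-level histograms. This is a minimal sketch assuming images normalized to [0, 1] and a conventional 256-bin histogram; neither setting is taken from the paper.

```python
import numpy as np

def entropy(img, bins=256):
    """EN: Shannon entropy of the image's grey-level distribution, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 1))
    p = hist / hist.sum()
    p = p[p > 0]                              # drop empty bins (0 log 0 = 0)
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, bins=256):
    """MI between two images, from their joint grey-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                                 bins=bins, range=[[0, 1], [0, 1]])
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz]))

# For fusion evaluation, MI is conventionally reported as the sum over both
# sources: MI(fused, ir) + MI(fused, vis).
```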
Conclusion
The proposed fusion algorithm based on anisotropic guided filtering resolves the detail "halo" problem of the original guided filter and effectively suppresses artifacts in the fused results. It is also scale-aware, so it better preserves the edge detail and background information of the source images and improves the accuracy of the fusion results.
Objective
Infrared (IR) images are based on the thermal radiation of the scene, and they are not susceptible to illumination and weather conditions. IR images are insensitive to changes in the brightness of the scene, and they usually have poor image quality and lack detailed information about the scene. By contrast, visible (VIS) images are sensitive to the optical information of the scene and contain a large amount of texture detail. However, in low-light and nighttime conditions, VIS images cannot capture the target clearly. Fusing IR and VIS images can provide complementary and redundant information about a scene in a single image. Thus, image fusion is an important technique for image processing and computer vision applications such as feature extraction and target recognition. Multi-scale decomposition (MSD), one of the most widely used image fusion approaches, has the advantage of extracting features at different scales. However, many traditional multi-scale transform methods ignore the different image characteristics of IR and VIS images; as a result, traditional IR and VIS fusion methods often lose edge detail information and fail to suppress halo artifacts. In this study, an IR and VIS image fusion algorithm based on anisotropic guided filtering and phase congruency (PC) is proposed, which preserves edge details and suppresses artifacts effectively.
Method
The proposed scheme can not only preserve the details of the source IR and VIS images, but also suppress halos and artifacts effectively by combining the advantages of an edge-preserving filter and PC. First, the input images are decomposed into a base layer and a series of detail layers: the base layer contains large-scale variations in intensity, and the detail layers capture texture details extracted by anisotropic guided filtering. Second, the saliency maps of the source images are calculated with PC and a Gaussian filter; initial binary weight maps are obtained by comparing pixel saliency and are then optimized by anisotropic guided filters of different scales, which reduces noise and suppresses halos. Finally, the fusion result is reconstructed from the base and detail layers according to the reconstruction rules. The main contributions of the proposed algorithm are as follows. 1) An edge-preserving filter based on MSD is employed to extract image features at different scales. The anisotropic guided filtering weights are optimized based on local neighborhood variances to achieve strongly anisotropic filtering; this operation can not only extract the image's texture details and preserve its edge features, but also prevent the halo phenomenon at edges. 2) A novel weight optimization based on spatial consistency is proposed, which reduces noise and smooths the weight surfaces. Anisotropic guided filtering is used to optimize the weight maps of each layer obtained by multi-scale edge-preserving decomposition. Compared with the original guided filter, anisotropic guided filtering addresses the detail halos and the inconsistent-structure handling found in previous variants of the guided filter. The experimental results show that the proposed scheme can not only make detail information more prominent, but also suppress artifacts effectively. 3) A PC operator is used instead of the Laplace operator to generate the saliency maps from the source images, because the PC operator is insensitive to variations in contrast and brightness.
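A sketch of the saliency, weight-refinement, and reconstruction stages may help make the pipeline concrete. It reuses guided_filter() and multiscale_decompose() from the earlier sketch and makes two simplifying assumptions that depart from the paper: a Laplacian-plus-Gaussian saliency stands in for the phase-congruency operator, and the ordinary guided filter stands in for its anisotropic variant; the radii and eps values are again illustrative.

```python
from scipy.ndimage import gaussian_filter, laplace

def fuse(ir, vis, radii=(2, 4, 8), eps=1e-2):
    """Fuse a registered IR/VIS pair (float arrays normalized to [0, 1])."""
    base_ir, det_ir = multiscale_decompose(ir, radii, eps)
    base_vis, det_vis = multiscale_decompose(vis, radii, eps)

    # Saliency maps: absolute high-pass response smoothed by a Gaussian
    # (a stand-in for the paper's phase-congruency saliency).
    s_ir = gaussian_filter(np.abs(laplace(ir)), sigma=2.0)
    s_vis = gaussian_filter(np.abs(laplace(vis)), sigma=2.0)
    w0 = (s_ir >= s_vis).astype(np.float64)  # initial binary weight map

    # Refine the binary map with edge-aware filtering at each scale so the
    # weights follow image structures; this reduces noise and halos.
    w_base = np.clip(guided_filter(ir, w0, r=8, eps=1e-1), 0.0, 1.0)
    fused = w_base * base_ir + (1.0 - w_base) * base_vis
    for d_ir, d_vis, r in zip(det_ir, det_vis, radii):
        w_d = np.clip(guided_filter(ir, w0, r=r, eps=1e-3), 0.0, 1.0)
        fused += w_d * d_ir + (1.0 - w_d) * d_vis
    return np.clip(fused, 0.0, 1.0)
```

A large filter radius and large eps for the base-layer weights favors smooth, spatially consistent blending, while smaller values at the detail scales keep the weights sharp around edges.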
Result
We test our method on the TNO image fusion dataset, which contains different military-relevant scenarios registered with different multiband camera systems (including Athena, DHV, FEL, and TRICLOBS). Fifteen typical image pairs are chosen to assess the performance of the proposed method against four classical methods. Four representative fusion methods are used for comparison: the convolutional neural network (CNN)-based method, the dual-tree complex wavelet transform (DTCWT)-based method, the weighted-average fusion algorithm based on guided filtering (GFF), and the fusion method based on anisotropic diffusion (ADF). CNN is representative of deep learning (DL)-based methods, DTCWT of wavelet-based methods, and GFF and ADF of edge-preserving filter-based methods. The experimental results demonstrate that the proposed method effectively extracts target feature information and preserves background information from the source images. The subjective evaluation results show that the proposed method is superior to the other four methods in detail, background, and object representation. The proposed method shows clear advantages not only in subjective evaluation, but also in several objective evaluation metrics. For objective evaluation, four indices are selected: mutual information (MI), degree of edge information preservation (Q^AB/F), entropy (EN), and gradient-based feature mutual information (FMI_gradient). The objective evaluation results show that the proposed algorithm has obvious advantages on all four metrics. The proposed algorithm has the largest MI and Q^AB/F values compared with the other four algorithms, which means that it extracts much more edge and detail information than the other methods. In addition, our method has the best performance on the EN and FMI_gradient values.
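A hypothetical end-to-end run tying the sketches above together; the file names are placeholders for a registered grayscale image pair, not files distributed with the paper.

```python
import imageio.v3 as iio
import numpy as np

# Placeholder file names for a registered grayscale IR/VIS pair.
ir = iio.imread("ir.png").astype(np.float64) / 255.0
vis = iio.imread("vis.png").astype(np.float64) / 255.0

fused = fuse(ir, vis)
print("EN:", entropy(fused))
print("MI:", mutual_information(fused, ir) + mutual_information(fused, vis))
```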
Conclusion
We propose a new IR and VIS image fusion scheme that combines multi-scale edge-preserving decomposition with anisotropic guided filtering. The multi-scale edge-preserving decomposition effectively extracts the meaningful information from the source images, and the anisotropic guided filtering eliminates artifacts and detail "halos". In addition, to improve the performance of our method, we employ a PC operator to obtain the saliency maps. The proposed fusion algorithm effectively suppresses halos in the fused results and better retains the edge details and background information of the source images. The experimental results show that, compared with the other algorithms, the proposed algorithm is more effective at preserving the details present in VIS images and highlighting the target information present in IR images. The proposed method could be further improved by combining it with DL-based methods.
Adu J H, Wang M H, Wu Z Y and Hu J. 2012. Infrared image and visible light image fusion based on nonsubsampled contourlet transform and the gradient of uniformity. International Journal of Advancements in Computing Technology, 4(5): 114-121[DOI:10.4156/ijact.vol4.issue5.14]
Bavirisetti D P and Dhuli R. 2016. Fusion of infrared and visible sensor images based on anisotropic diffusion and Karhunen-Loeve transform. IEEE Sensors Journal, 16(1): 203-209[DOI:10.1109/JSEN.2015.2478655]
Bulanon D M, Burks T F and Alchanatis V. 2009. Image fusion of visible and thermal images for fruit detection. Biosystems Engineering, 103(1): 12-22[DOI:10.1016/j.biosystemseng.2009.02.009]
He K M, Sun J and Tang X O. 2013. Guided image filtering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(6): 1397-1409[DOI:10.1109/TPAMI.2012.213]
Hu J W and Li S T. 2012. The multiscale directional bilateral filter and its application to multisensor image fusion. Information Fusion, 13(3): 196-206[DOI:10.1016/j.inffus.2011.01.002]
Lewis J J, O'Callaghan R J, Nikolov S G, Bull D R and Canagarajah N. 2007. Pixel- and region-based image fusion with complex wavelets. Information Fusion, 8(2): 119-130[DOI:10.1016/j.inffus.2005.09.006]
Li H, Liu L, Huang W and Yue C. 2016. An improved fusion algorithm for infrared and visible images based on multi-scale transform. Infrared Physics and Technology, 74: 28-37[DOI:10.1016/j.infrared.2015.11.002]
Li S T, Kang X D and Hu J W. 2013. Image fusion with guided filtering. IEEE Transactions on Image Processing, 22(7): 2864-2875[DOI:10.1109/TIP.2013.2244222]
Li Z, Li J Z, Hu Y J and Zhang Y. 2019. Mixed prior and weighted guided filter image dehazing algorithm. Journal of Image and Graphics, 24(2): 170-179[DOI:10.11834/jig.180450]
Liu C H, Qi Y and Ding W R. 2017. Infrared and visible image fusion method based on saliency detection in sparse domain. Infrared Physics and Technology, 83: 94-102[DOI:10.1016/j.infrared.2017.04.018]
Liu Y, Chen X, Cheng J, Peng H and Wang Z F. 2018. Infrared and visible image fusion with convolutional neural networks. International Journal of Wavelets, Multiresolution and Information Processing, 16(3): #1850018[DOI:10.1142/S0219691318500182]
Ochotorena N C and Yamashita Y. 2020. Anisotropic guided filtering. IEEE Transactions on Image Processing, 29: 1397-1412[DOI:10.1109/TIP.2019.2941326]
Toet A. 2014. TNO image fusion dataset[EB/OL]. [2020-06-23]. https://doi.org/10.6084/m9.figshare.1008029.v1
Toet A and Hogervorst M A. 2016. Multiscale image fusion through guided filtering//Proceedings of SPIE Volume 9997, Target and Background Signatures II. Edinburgh, UK: SPIE: #99970[DOI:10.1117/12.2239945]
Xie W, Zhou Y Q and You M. 2016. Improved guided image filtering integrated with gradient information. Journal of Image and Graphics, 21(9): 1119-1126[DOI:10.11834/jig.20160901]
Yang H, Wu X T, He B G and Zhu M. 2015. Image fusion based on multiscale guided filters. Journal of Optoelectronics·Laser, 26(1): 170-176[DOI:10.16136/j.joel.2015.01.0628]
Zhang L, Zhang L, Mou X Q and Zhang D. 2011. FSIM: a feature similarity index for image quality assessment. IEEE Transactions on Image Processing, 20(8): 2378-2386[DOI:10.1109/TIP.2011.2109730]
Zhang Y X, Wei W and Yuan Y T. 2019. Multi-focus image fusion with alternating guided filtering. Signal, Image and Video Processing, 13(4): 727-735[DOI:10.1007/s11760-018-1402-x]
Zhao C and Huang Y D. 2019. Infrared and visible image fusion via rolling guidance filtering and hybrid multi-scale decomposition. Laser and Optoelectronics Progress, 56(14): #141007[DOI:10.3788/LOP56.141007]
Zhu H R, Liu Y Q and Zhang W Y. 2018. Infrared and visible image fusion based on contrast enhancement and multi-scale edge-preserving decomposition. Journal of Electronics and Information Technology, 40(6): 1294-1300[DOI:10.11999/JEIT170956]