Dual-scale decomposition and saliency analysis based infrared and visible image fusion
2021, Vol. 26, No. 12: 2813-2825
Received: 2020-08-03; Revised: 2020-10-12; Accepted: 2020-10-19; Published in print: 2021-12-16
DOI: 10.11834/jig.200405
Objective
To address the weakened target information, unclear background details, blurred edges, and low fusion efficiency found in image fusion, and to make full use of the useful features of the source images, this paper combines dual-scale decomposition with the idea of visual-saliency-based fusion weights and proposes a dual-scale image fusion method based on saliency analysis and spatial consistency.
Method
A mean filter is used to perform dual-scale decomposition of the source images, yielding first the base-layer and then the detail-layer information of each source image. The base layers are fused with a weighted-average rule; for the detail layers, an initial weight map is obtained by saliency analysis and then optimized by guided filtering, and the resulting final weight map guides the weighted fusion. The fused image is obtained by dual-scale reconstruction. A minimal sketch of the decomposition step follows.
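To make the decomposition step concrete, here is a minimal Python sketch of the dual-scale split and the arithmetic-average base fusion. It assumes scipy.ndimage.uniform_filter as the mean filter, and the 31x31 window size is an assumed parameter, since the abstract does not specify filter sizes.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def dual_scale_decompose(img, size=31):
        # Base layer: low-frequency mean-filter output.
        # Detail layer: high-frequency residual (source minus base).
        img = img.astype(np.float64)
        base = uniform_filter(img, size=size)
        return base, img - base

    def fuse_base(base_ir, base_vis):
        # Weighted-average (arithmetic mean) rule for the base layers.
        return 0.5 * (base_ir + base_vis)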
Result
Given the different characteristics of traditional and deep learning methods, the proposed method is evaluated subjectively and objectively on TNO and other public datasets. Subjective analysis shows that the method effectively extracts and fuses the important information of the source images, producing fused images of high quality with natural and clear visual effects. Objective evaluation experimentally verifies the effectiveness of the method in improving the fusion effect. In quantitative comparison with various fusion results, the method achieves the best average scores in average gradient, edge intensity, spatial frequency, feature mutual information, and cross-entropy; compared with a deep learning method, the mean values of entropy, average gradient, edge intensity, spatial frequency, feature mutual information, and cross-entropy are improved by 6.87%, 91.28%, 91.45%, 85.10%, 0.18%, and 45.45%, respectively.
Conclusion
Experimental results show that the proposed method not only significantly enhances target, background-detail, and edge information but also exploits the useful features of the source images quickly and effectively.
Objective
Image fusion technology is of great significance for image recognition and understanding, and infrared and visible image fusion has been widely applied in computer vision, target detection, video surveillance, the military, and many other areas. Nevertheless, weakened targets, unclear background details, blurred edges, and low fusion efficiency persist, largely because of high algorithmic complexity. Compared with most multi-scale methods, which require more than two decomposition levels, dual-scale methods exploit the large difference in information between the two scales, reducing algorithmic complexity and obtaining satisfying results at the first decomposition level itself. However, insufficient extraction of salient features and neglect of the influence of noise may still lead to unsatisfactory fusion results. This paper therefore combines dual-scale decomposition with saliency analysis and spatial consistency to acquire high-quality fusion of infrared and visible images.
Method
Visual saliency is used to integrate the important and valuable information of the source images into the fused image, and spatial consistency is fully considered to prevent noise from affecting the fusion result. First, a mean filter is applied to each source image to separate its high-frequency and low-frequency information: the base image, containing the low-frequency information, is the filter output, and the detail image, containing the high-frequency information, is obtained by subtracting the base image from the source image. Next, because the human visual system is sensitive to base and detail information in different ways, a simple weighted-average rule, that is, the arithmetic mean, is used to fuse the base images; this preserves the common features of the source images and reduces the redundant information in the fused base image. For the detail images, fusion weights based on visual saliency guide the weighting: saliency information is extracted as the difference between the mean-filter and median-filter outputs, and the saliency map of each source image is obtained by applying a Gaussian filter to that difference. An initial weight map is constructed from the visual saliency and then, following the principle of spatial consistency, optimized by guided filtering to reduce noise and keep the weight boundaries aligned with object boundaries. The detail images are fused under the guidance of the resulting final weight map, so that target, background-detail, and edge information are enhanced while noise is suppressed. Finally, dual-scale reconstruction of the fused base and detail images yields the final fused image. A sketch of this weighting step follows.
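A minimal Python sketch of the detail-layer weighting is given below. The saliency and guided-filtering steps follow the description above, while the winner-takes-all initial weight rule, the filter sizes, and the guided-filter parameters r and eps are assumptions not fixed by the abstract; the guided filter is a compact gray-scale implementation of the standard box-filter formulation, not the authors' exact code.

    import numpy as np
    from scipy.ndimage import uniform_filter, median_filter, gaussian_filter

    def saliency_map(img, size=31, sigma=5.0):
        # Saliency: Gaussian-smoothed difference between the
        # mean-filter and the median-filter outputs.
        diff = np.abs(uniform_filter(img, size=size)
                      - median_filter(img, size=size))
        return gaussian_filter(diff, sigma=sigma)

    def guided_filter(guide, src, r=8, eps=0.1):
        # Compact gray-scale guided filter (box-filter formulation).
        mean = lambda x: uniform_filter(x, size=2 * r + 1)
        m_g, m_s = mean(guide), mean(src)
        a = (mean(guide * src) - m_g * m_s) / (mean(guide * guide) - m_g ** 2 + eps)
        b = m_s - a * m_g
        return mean(a) * guide + mean(b)

    def fuse(ir, vis, size=31):
        ir, vis = ir.astype(np.float64), vis.astype(np.float64)
        base_ir, base_vis = uniform_filter(ir, size=size), uniform_filter(vis, size=size)
        det_ir, det_vis = ir - base_ir, vis - base_vis
        # Initial weight map: winner-takes-all comparison of saliency maps
        # (one plausible rule; the abstract does not specify the mapping).
        w = (saliency_map(ir) >= saliency_map(vis)).astype(np.float64)
        # Spatial consistency: refine the weights by guided filtering with
        # the source image as guide, aligning them with object boundaries.
        w = np.clip(guided_filter(ir, w), 0.0, 1.0)
        detail_fused = w * det_ir + (1.0 - w) * det_vis
        return 0.5 * (base_ir + base_vis) + detail_fused  # dual-scale reconstruction

Applied to a pair of registered gray images, fuse(ir, vis) returns the fused image; in a fuller implementation each weight map would be refined with its own source image as the guide.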
Result
Based on the different characteristics of traditional and deep learning methods, two groups of gray images from TNO and other public datasets were selected for comparison experiments, and subjective and objective evaluations against other methods were conducted on the MATLAB R2018a platform to verify the effectiveness and superiority of the proposed method. For the subjective analysis, the key prominent areas of the results are marked with white boxes to illustrate the differences among the fused images in detail; the proposed method comprehensively and accurately extracts the information of the source images and produces fused images with clear visual effects. For the objective evaluation, the first group of experiments verifies the effectiveness of the proposed method in improving the fusion effect: the average scores of average gradient, edge intensity, spatial frequency, feature mutual information, and cross-entropy are 3.990 7, 41.793 7, 10.536 6, 0.446 0, and 1.489 7, respectively, all the best among the compared methods. On the second group of experimental images, the proposed method shows obvious advantages over a deep learning method: it also obtains the highest entropy, and entropy, average gradient, edge intensity, spatial frequency, feature mutual information, and cross-entropy increase on average by 6.87%, 91.28%, 91.45%, 85.10%, 0.18%, and 45.45%, respectively. Definitions of three of these metrics are sketched below.
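For reference, the sketch below gives standard definitions of three of the reported metrics (entropy, average gradient, and spatial frequency); the paper's exact implementations, and the formulas for edge intensity, feature mutual information, and cross-entropy, are not reproduced here and may differ in detail.

    import numpy as np

    def entropy(img):
        # Shannon entropy (bits) of the 8-bit gray-level histogram.
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def average_gradient(img):
        # Mean magnitude of local horizontal/vertical gray-level differences.
        img = img.astype(np.float64)
        gx = np.diff(img, axis=1)[:-1, :]
        gy = np.diff(img, axis=0)[:, :-1]
        return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

    def spatial_frequency(img):
        # SF = sqrt(RF^2 + CF^2), from row and column differences.
        img = img.astype(np.float64)
        rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
        cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
        return np.sqrt(rf ** 2 + cf ** 2)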
Conclusion
Extensive experiments demonstrate that, because of the complexity of salient-feature extraction and the uncertainty of noise in the fusion process, some existing fusion methods are inevitably limited and their fusion results cannot meet the high-quality requirements of image processing. By contrast, the proposed method, which combines dual-scale decomposition with fusion weights based on visual saliency, achieves good results: the enhancement of target, background-detail, and edge information is particularly significant, and the method shows good anti-noise performance. High-quality fusion of multiple groups of images is achieved quickly and effectively, opening up the possibility of real-time fusion of infrared and visible images, and the actual effect compares favorably with a fusion method based on a deep learning framework. The method is also fairly universal and can be extended to the fusion of other multi-source and multi-modal images.