Image fusion of multidirectional sum-modified Laplacian and tetrolet transform
2020, Vol. 25, No. 4: 721-731
Received: 2019-07-03; Revised: 2019-09-16; Accepted: 2019-09-23; Published in print: 2020-04-16
DOI: 10.11834/jig.190311
Objective
Most existing infrared and visible image fusion algorithms are sufficient for recognizing a scene, but they fail to depict its detailed features finely. To further improve scene discriminability, a multiscale geometric image fusion algorithm based on the tetrolet transform is proposed.
Method
First, the infrared and visible images are mapped into the tetrolet transform domain, and each is decomposed into low-frequency and high-frequency coefficients. For the low-frequency coefficients, region-energy theory is combined with the traditional weighting method: exploiting the variability of regional energy and the correlation of neighboring pixels, the weighting coefficients are selected adaptively for fusion. For the high-frequency coefficients, the sum-modified Laplacian is computed with an improved multidirectional Laplacian operator, and regional smoothness is introduced as a threshold to set the high-frequency fusion rule. Finally, the fused low-frequency and high-frequency coefficients are used to reconstruct the fused image.
Result
On three pairs of infrared and visible images (kaptein, street, and road), the fusion results are compared with those of the contourlet transform (CL), the discrete wavelet transform (DWT), and the nonsubsampled contourlet transform (NSCT). Subjectively, the proposed algorithm outperforms the other three methods in rendering the background, the targets, and the details. Objectively, its running time is 0.37 s shorter than that of the NSCT method; the average gradient (AvG) and spatial frequency (SF) values increase substantially, by at most 5.42 and 2.75, respectively; and the peak signal-to-noise ratio (PSNR), information entropy (IE), and structural similarity (SSIM) values increase by 0.25, 0.12, and 0.19, respectively.
Conclusion
The proposed infrared and visible image fusion algorithm improves the depiction of detail in the fused image and thereby helps observers understand the scene better.
Objective
Image fusion is an important form of information fusion that is widely used in image understanding and computer vision. It combines multiple images that describe the same scene in different forms so that accurate and comprehensive information can be obtained, and to some extent the fused image provides effective information for subsequent image processing. Infrared and visible image fusion is a hot topic in this field. By combining the background information of the visible image with the target features of the infrared image, the information of the two images can be fused fully, the scene can be described comprehensively and accurately, the target features and the background become easier to recognize, and people's perception and understanding of the image are enhanced. General infrared and visible image fusion algorithms can achieve the purpose of recognizing the scene, but they cannot depict its detailed features finely enough to improve scene identification further and to provide effective information for subsequent processing. Aiming at this problem, this study proposes a tetrolet-based multiscale geometric transformation fusion algorithm. The tetrolet transform divides the source image into several image blocks and transforms each block to obtain its low-frequency and high-frequency coefficients; the coefficients of all blocks are then arranged into image matrices that form the low-frequency and high-frequency coefficients of the source image.
Method
First, the infrared and visible images are mapped into the tetrolet transform domain, and each image is subjected to a tetrolet transformation. According to four-square (tetromino) tiling theory, the best tiling is selected from the 117 candidate tilings by the criterion of the maximum first-order (l1) norm, which yields the low-frequency and high-frequency coefficients of the infrared and visible images.
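As an illustration of this block-wise decomposition, the following Python/NumPy sketch applies a Haar-like mean/deviation split inside each 4x4 block and scores candidate tilings by the l1 norm of the detail part. It is a simplification: only two hand-coded tilings are listed instead of the full 117 tetromino coverings, the filters are not the exact tetrolet filters, and the function names are illustrative.

```python
import numpy as np

# Two hand-coded coverings of a 4x4 block by four tetrominoes
# (flattened indices 0..15); the full transform searches 117 such tilings.
TILINGS = [
    [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]],   # four I-tetromino rows
    [[0, 1, 4, 5], [2, 3, 6, 7], [8, 9, 12, 13], [10, 11, 14, 15]],   # four O-tetromino squares
]

def decompose_block(block, criterion="max"):
    """Haar-like tetrolet-style decomposition of one 4x4 block (illustrative only).

    criterion="max" follows the selection rule stated in the abstract.
    """
    pix = block.reshape(-1)
    best = None
    for tiling in TILINGS:
        low = np.array([pix[t].mean() for t in tiling])                   # one low-pass value per tetromino
        high = np.concatenate([pix[t] - pix[t].mean() for t in tiling])   # deviations as detail coefficients
        score = np.abs(high).sum()                                        # l1 norm of the detail part
        better = best is None or (score > best[0] if criterion == "max" else score < best[0])
        if better:
            best = (score, low, high, tiling)
    return best[1], best[2], best[3]

def tetrolet_like(img):
    """Tile the image into 4x4 blocks and collect low/high coefficients."""
    h, w = img.shape
    lows, highs = [], []
    for i in range(0, h - h % 4, 4):
        for j in range(0, w - w % 4, 4):
            low, high, _ = decompose_block(img[i:i + 4, j:j + 4].astype(float))
            lows.append(low)
            highs.append(high)
    return np.array(lows), np.array(highs)
```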
Then, the low-frequency coefficients of the two images are fused by combining region-energy theory with the traditional weighting method: exploiting the variability of regional energy and the correlation of neighboring pixels, the weighting coefficients are selected adaptively as the central pixel changes, which yields the fused low-frequency coefficients.
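A minimal sketch of this region-energy weighting idea, assuming NumPy and SciPy, is given below. The paper's exact adaptive rule tied to the changing centre pixel is not specified in the abstract, so the sketch simply lets the weight follow the local-energy ratio within a small window; the function name and window size are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_low_freq(low_ir, low_vis, win=3, eps=1e-12):
    """Region-energy weighted fusion of low-frequency coefficients (sketch)."""
    # Local energy: windowed sum of squared coefficients around each pixel.
    e_ir = uniform_filter(low_ir.astype(float) ** 2, size=win) * win * win
    e_vis = uniform_filter(low_vis.astype(float) ** 2, size=win) * win * win
    w_ir = e_ir / (e_ir + e_vis + eps)          # adaptive weight follows the local-energy ratio
    return w_ir * low_ir + (1.0 - w_ir) * low_vis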
For the high-frequency coefficients of the two images, the traditional sum-modified Laplacian is computed from only the four Laplacian directions, namely up, down, left, and right. Considering that pixels along the diagonal directions also contribute to the sum-modified Laplacian, this study computes it with an improved eight-direction Laplacian operator and introduces the regional smoothness as a threshold. If the sum-modified Laplacian exceeds the threshold, a weighting coefficient is derived from the smoothness and the threshold and weighted fusion is performed; otherwise, the fusion rule selects between the maximum and minimum sum-modified Laplacian of the two high-frequency components. This yields the fused high-frequency coefficients.
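The eight-direction sum-modified Laplacian can be sketched as follows. The 1/sqrt(2) step-length weight on the diagonal terms and the simple max-SML selection are assumptions; the smoothness-based weighted branch is omitted because the abstract does not give its exact formula.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sml_8dir(coef, win=3):
    """Eight-direction sum-modified Laplacian (SML) of a coefficient map (sketch)."""
    p = np.pad(coef.astype(float), 1, mode="edge")
    c = p[1:-1, 1:-1]
    ml = (np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:]) +              # horizontal term
          np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1]) +              # vertical term
          (np.abs(2 * c - p[:-2, :-2] - p[2:, 2:]) +                # main diagonal
           np.abs(2 * c - p[:-2, 2:] - p[2:, :-2])) / np.sqrt(2))   # anti-diagonal
    return uniform_filter(ml, size=win) * win * win                 # sum over a win x win region

def fuse_high_freq(high_ir, high_vis):
    """Per coefficient, keep the source whose eight-direction SML is larger (sketch)."""
    choose_ir = sml_8dir(high_ir) >= sml_8dir(high_vis)
    return np.where(choose_ir, high_ir, high_vis)
```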
Finally, the fused low-frequency and high-frequency coefficients are used to reconstruct the fused image.
Result
The fusion results on three pairs of infrared and visible images (kaptein, street, and road) are compared with those of the contourlet transform (CL), the discrete wavelet transform (DWT), and the nonsubsampled contourlet transform (NSCT). In terms of visual effect, the fused images of the proposed algorithm are superior to those of the other three methods in background, scene objects, and detail rendering. In terms of objective indicators, the running time of the proposed algorithm is 0.37 s shorter than that of the NSCT method. The average gradient (AvG) and spatial frequency (SF) values of the fused images improve considerably, with maximum increases of 5.42 and 2.75, respectively, while the peak signal-to-noise ratio (PSNR), information entropy (IE), and structural similarity index (SSIM) values increase by 0.25, 0.12, and 0.19, respectively. The experimental results show that the proposed algorithm improves the effect and quality of the fused image to a certain extent.
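For reference, the commonly used definitions of the average gradient and spatial frequency behind the figures above can be sketched as below; PSNR, IE, and SSIM are available in standard libraries such as scikit-image, and the exact formulas used in the paper may differ slightly.

```python
import numpy as np

def average_gradient(img):
    """Average gradient (AvG): mean magnitude of horizontal/vertical differences."""
    img = img.astype(float)
    dx = img[:-1, 1:] - img[:-1, :-1]
    dy = img[1:, :-1] - img[:-1, :-1]
    return np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0))

def spatial_frequency(img):
    """Spatial frequency (SF): RMS of row and column first differences."""
    img = img.astype(float)
    rf = np.sqrt(np.mean((img[:, 1:] - img[:, :-1]) ** 2))   # row frequency
    cf = np.sqrt(np.mean((img[1:, :] - img[:-1, :]) ** 2))   # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)
```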
Conclusion
This work proposes an infrared and visible image fusion method based on regional energy and an improved multidirectional Laplacian energy. The infrared and visible images are mapped into the transform domain by the tetrolet transformation and decomposed into low-frequency and high-frequency coefficients. The low-frequency coefficients are fused according to region-energy theory and an adaptive weighted fusion criterion, and the high-frequency coefficients of the infrared and visible images are selected according to the improved Laplacian energy and the regional smoothness. The fused image is then obtained by inverse transformation of the fused low-frequency and high-frequency coefficients. Compared with the results of the other three transform-domain algorithms, the fused images not only enhance the background information but also markedly improve the rendering of details in the scene, and they show advantages in objective evaluation indexes such as the average gradient and the peak signal-to-noise ratio. The observer's ability to understand the scene is thereby improved.