Medical image fusion using double dictionary learning and adaptive PCNN
2019, Vol. 24, No. 9, pp. 1588-1603
Received: 2019-01-22; Revised: 2019-03-16; Published in print: 2019-09-16
DOI: 10.11834/jig.180667
Objective
To address the insufficient detail-preservation ability of sparse-coding-based medical image fusion methods, a multimodal medical image fusion method based on convolutional sparse representation double dictionary learning and an adaptive pulse-coupled neural network (PCNN) is proposed.
Method
First, convolutional sparse and convolutional low-rank sub-dictionaries are learned from registered training images. Under the two dictionaries, the convolutional sparse and convolutional low-rank representation coefficients are obtained with the alternating direction method of multipliers (ADMM), and the convolutional sparse and convolutional low-rank components are reconstructed from the corresponding dictionaries. Then the novel sum-modified spatial frequency (NMSF) and the novel sum-modified Laplacian (NSML) are used to excite the PCNN to fuse the convolutional sparse and convolutional low-rank components, respectively. Finally, the fused convolutional sparse and convolutional low-rank components are combined to obtain the final fused image.
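The ADMM-based decomposition alternates between a sparsity-promoting update and a low-rank-promoting update. The two proximal steps at the heart of such schemes can be sketched as follows (a minimal NumPy illustration of soft thresholding and singular value thresholding, not the paper's full Fourier-domain solver; the function names are ours):

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm: shrinks values toward zero,
    promoting sparsity (the sparse-coefficient update)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def singular_value_threshold(X, tau):
    """Proximal operator of the nuclear norm: soft-thresholds the
    singular values, promoting low rank (the low-rank-coefficient update)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * soft_threshold(s, tau)) @ Vt
```

In an ADMM iteration these operators are applied to the residual-corrected variables; the sparse component keeps only significant coefficients, while the low-rank component keeps only the dominant singular directions.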
Result
Simulations were conducted on grayscale and color images and compared with other fusion methods. Experimental results show that the proposed method is clearly superior to the six compared methods in objective evaluation and visual quality, achieving the best performance on all four indicators. Compared with the six multimodal image fusion methods, over the three groups of experiments the average standard deviation increased by 7%, 10%, and 5.2%; the average mutual information by 33.4%, 10.9%, and 11.3%; the average spatial frequency by 8.2%, 9.6%, and 5.6%; and the average edge evaluation factor by 16.9%, 20.7%, and 21.6%.
Conclusion
Compared with other sparse representation methods, the proposed method effectively improves the quality of multimodal medical image fusion, better preserves the detail information of the source images, enriches the information in the fused image, conforms to the visual characteristics of the human eye, and effectively assists doctors in disease diagnosis.
Objective
The fusion of multimodal medical images is an important medical imaging method that integrates complementary information from multimodal images to produce new composite images. Sparse representation has achieved great success in medical image fusion in the past few years. However, given that the sparse representation method is based on sliding-window technology, its ability to preserve the details of the fused image is insufficient. Therefore, a multimodal medical image fusion method based on convolutional sparse representation double dictionary learning and an adaptive PCNN (pulse-coupled neural network) is proposed.
Method
According to the low-rank and sparsity characteristics of images, the method decomposes the source image into two parts and constructs a double dictionary based on convolutional sparse representation. The sparse component contains a large amount of detail texture, and the low-rank component contains basic information such as contour and brightness. First, low-rank and sparse features are extracted from the training images to form two basic dictionaries for representing the test images. The dictionary learning model is improved by adding low-rank and sparse constraints to the low-rank component and the sparse component, respectively, to enhance the discriminability of the double dictionary. In the dictionary learning process, the alternating iterative update is divided into three parts: auxiliary variable update, sparse coding, and dictionary update. The convolutional sparse and convolutional low-rank sub-dictionaries for the training images are obtained by cycling through these three steps. Then, total variation regularization is incorporated into the image decomposition model, and the Fourier-domain alternating direction method of multipliers is used to obtain the representation coefficients of the sparse and low-rank components of the source image in their respective sub-dictionaries. This process alternates between two parts, namely the convolutional sparse coefficient update and the convolutional low-rank coefficient update. Next, the sparse component of the source image is obtained by convolving the convolutional sparse coefficients with the corresponding sub-dictionary; similarly, the convolutional low-rank coefficients are convolved with the corresponding sub-dictionary to obtain the low-rank component. The novel sum-modified spatial frequency of the sparse component is calculated as the external excitation of the pulse-coupled neural network to preserve image details, and the link strength is adaptively determined by the regional average gradient to obtain a firing map of the sparse component. The novel sum-modified Laplacian of the low-rank component is calculated as the external excitation of the pulse-coupled neural network, and the link strength is likewise adaptively determined by the regional average gradient to obtain its firing map. The fused sparse component is obtained by comparing the firing counts of the different sparse components; similarly, the low-rank components of the different source images are fused through their firing maps. Finally, the fused image is obtained by combining the fused convolutional sparse and convolutional low-rank components, further improving the quality of the fused image.
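The firing-map fusion idea can be sketched with a heavily simplified discrete PCNN (illustrative only: the link strength beta is a fixed constant here, whereas the paper adapts it from the regional average gradient and uses NMSF/NSML as the external excitation; all parameter values and function names are our assumptions):

```python
import numpy as np

def neighbour_link(Y):
    """Weighted sum of firings in the 3x3 neighbourhood of each neuron."""
    k = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    p = np.pad(Y, 1)
    h, w = Y.shape
    out = np.zeros_like(Y)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def pcnn_firing_map(stimulus, beta=0.2, alpha=0.2, v_theta=20.0, n_iter=50):
    """Simplified PCNN: neurons with a stronger external stimulus fire
    more often over the iterations, yielding a per-pixel firing count."""
    F = stimulus.astype(float)       # external excitation
    Y = np.zeros_like(F)             # firing output
    theta = np.ones_like(F)          # dynamic threshold
    fires = np.zeros_like(F)
    for _ in range(n_iter):
        L = neighbour_link(Y)        # linking input from neighbouring firings
        U = F * (1.0 + beta * L)     # internal activity
        Y = (U > theta).astype(float)
        theta = np.exp(-alpha) * theta + v_theta * Y  # decay, jump on firing
        fires += Y
    return fires

def fuse_by_firing(comp_a, comp_b, fires_a, fires_b):
    """Per pixel, keep the component whose neuron fired more often."""
    return np.where(fires_a >= fires_b, comp_a, comp_b)
```

Running `pcnn_firing_map` on the NMSF (or NSML) maps of two components and comparing the counts with `fuse_by_firing` reproduces, in miniature, the selection rule described above.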
Result
Three sets of brain multimodal medical images (namely, CT/MR, MR/PET, and MR/SPECT) were simulated and compared with those processed by other fusion methods. Experimental results show that the proposed fusion method is significantly superior to the six compared methods according to objective evaluation and visual quality and has the best performance on four indicators. Compared with the six multimodal image fusion methods, the mean standard deviation of the three groups of experiments increased by 7%, 10%, and 5.2%, respectively; the average mutual information increased by 33.4%, 10.9%, and 11.3%; the average spatial frequency increased by 8.2%, 9.6%, and 5.6%; and the average edge evaluation factor increased by 16.9%, 20.7%, and 21.6%.
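Two of the four indicators can be sketched in a few lines (illustrative NumPy versions under common definitions; the edge evaluation factor Q^AB/F has a more involved gradient-based definition not shown here):

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency: overall activity level from row and column
    differences (the standard deviation indicator is simply img.std())."""
    rf = np.diff(img.astype(float), axis=0) ** 2  # row-direction gradients
    cf = np.diff(img.astype(float), axis=1) ** 2  # column-direction gradients
    return float(np.sqrt(rf.mean() + cf.mean()))

def mutual_information(a, b, bins=32):
    """Mutual information between two images via a joint histogram:
    how much knowing one image reduces uncertainty about the other."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of b
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```

For fusion evaluation, the mutual information between the fused image and each source image is computed and summed; higher values indicate that more source information was transferred.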
Conclusion
Compared with other sparse representation methods, the proposed algorithm effectively improves the quality of multimodal medical image fusion, better preserves the detailed information of the source images, enriches the information in the fused image, and conforms to the visual characteristics of the human eye, thereby effectively assisting doctors in diagnosing diseases.