A review of medical image fusion methods
2023, Vol. 28, No. 1: 118-143
Print publication date: 2023-01-16
Accepted: 2022-08-22
DOI: 10.11834/jig.220603
Yuping Huang, Weisheng Li. A review of medical image fusion methods[J]. Journal of Image and Graphics, 2023, 28(1): 118-143.
Multimodal medical image fusion provides more comprehensive and accurate image descriptions for clinical applications such as medical diagnosis, treatment planning, and surgical navigation. However, because disease types are varied and complex, single-modal medical images are often insufficient for diagnosing disease types and localizing lesions; multimodal medical image fusion methods address this problem by producing images with richer information for clinical use. Medical imaging techniques fall into two broad groups: electromagnetic energy-based and acoustic energy-based. The latter exploits the different propagation speeds of ultrasound in different media to achieve real-time imaging and provide dynamic images. Current medical image fusion techniques mainly concern the static images produced by electromagnetic energy-based techniques, such as X-ray computed tomography, single photon emission computed tomography, positron emission tomography, and magnetic resonance imaging. We review recent literature on the current status of medical image fusion methods and divide existing techniques into two categories: 1) traditional methods and 2) deep learning methods.

Traditional medical image fusion methods are dominated by spatial domain-based and frequency domain-based algorithms. Spatial-domain techniques evaluate image element values directly via pixel-level strategies, and the fused images exhibit less spatial distortion and a lower signal-to-noise ratio. Spatial domain-based methods include 1) simple min/max, 2) independent component analysis, 3) principal component analysis, 4) weighted average, 5) simple average, 6) fuzzy logic, and 7) cloud model. The fusion process of spatial domain-based methods is quite simple, their low algorithmic complexity keeps computation cost down, and they perform relatively well in alleviating the spectral distortion of fused images. However, their fusion results still call for improvement in clarity and contrast and continue to suffer from low spatial resolution. In frequency domain-based methods, the input image is first converted from the spatial domain to the frequency domain via the Fourier transform, a fusion algorithm is applied to the transformed coefficients, and the final fused image is obtained by the inverse Fourier transform. Commonly used frequency-domain fusion algorithms comprise 1) pyramid transform, 2) wavelet transform, and 3) multi-scale geometric transform fusion algorithms. These multi-level decomposition-based methods enhance the detail retention of the fused image, and their outputs contain high spatial resolution and high-quality spectral components. However, this type of algorithm requires a fine-grained fusion rule design.

Deep learning-based methods mainly build on convolutional neural networks (CNNs) and generative adversarial networks (GANs), which avoid fine-grained fusion rule design, reduce manual involvement in the process, and, owing to their stronger feature extraction capability, retain more source image information in the fusion results. A CNN can effectively process the spatial and structural information in the neighborhood of the input image. It consists of a series of convolutional layers, pooling layers, and fully connected layers: the convolutional and pooling layers extract features from the source images, and the fully connected layers map these features to the final output. In CNN-based fusion, image fusion is regarded as a classification problem corresponding to the process of feature extraction, feature selection, and output prediction; the fusion task also targets image transformation, activity-level measurement, and fusion rule design. Unlike CNNs, GANs model saliency information in medical images through an adversarial learning mechanism. A GAN is a generative model with two multilayer networks: a generator that produces pseudo data, and a discriminator that classifies images as real or pseudo data. Back-propagation-based training improves the GAN's ability to distinguish real data from generated data. Although GANs are not yet as widely used in multimodal medical image fusion (MMIF) as CNNs, they hold clear potential for in-depth research.

We further develop a complete overview of existing multimodal medical image databases and fusion quality evaluation metrics. Four open-source, freely accessible medical image databases are covered: the Open Access Series of Imaging Studies (OASIS) dataset, The Cancer Imaging Archive (TCIA) dataset, the Whole Brain Atlas (AANLIB) dataset, and the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. In addition, a database of green fluorescent protein and phase-contrast images, the John Innes Centre (JIC) dataset, is included. Our review then summarizes 25 commonly used evaluation indicators for medical image fusion results in four types of metrics: 1) information theory-based, 2) image feature-based, 3) image structural similarity-based, and 4) human visual perception-based, as well as 22 fusion algorithms applied to medical image datasets in recent years. The pros and cons of these algorithms are analyzed through comparison of their techniques, fusion modes, and evaluation indexes.

In addition, we carry out extensive experiments to compare the performance of deep learning-based and traditional medical image fusion methods. Source images of three modal pairs are tested qualitatively and quantitatively with the 22 multimodal medical image fusion algorithms. For qualitative analysis, the brightness, contrast, and distortion of the fused images are observed with respect to the human visual system; for quantitative analysis, 15 objective evaluation indexes are used. Based on the qualitative and quantitative results, we discuss the current situation, challenging issues, and future directions of medical image fusion techniques. Both traditional and deep learning methods have promoted fusion performance to a certain extent. With algorithm optimization and the enrichment of medical image datasets, more fusion methods with good fusion effect and high model robustness can be expected. The two technical fields will continue to develop toward the common research trends of expanding multi-facet and multi-case medical image collections, proposing effective indicators suited to medical image fusion, and deepening the research scope of image fusion.
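To make the spatial-domain family concrete, the weighted-average rule named above can be sketched in a few lines of Python. This is a minimal illustration rather than any specific algorithm from the surveyed literature; the 2×2 "CT" and "MRI" patches and the equal 0.5/0.5 weights are hypothetical (practical methods choose weights adaptively, e.g. from local activity measures).

```python
def weighted_average_fusion(img_a, img_b, w_a=0.5, w_b=0.5):
    """Fuse two co-registered grayscale images (nested lists of equal
    size) by a per-pixel weighted average of their intensities."""
    assert len(img_a) == len(img_b), "images must be co-registered and equal-sized"
    return [
        [w_a * pa + w_b * pb for pa, pb in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]

# Hypothetical 2x2 patches with intensities in [0, 255].
ct = [[100, 200], [50, 150]]
mri = [[200, 100], [150, 50]]
print(weighted_average_fusion(ct, mri))  # → [[150.0, 150.0], [100.0, 100.0]]
```

With equal weights this reduces to the simple-average method from the same list; setting one weight to zero recovers a single source image.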
Keywords: multimodal medical image; medical image fusion; deep learning; medical image database; quality evaluation metrics
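The frequency-domain pipeline described in the abstract (forward transform, coefficient fusion, inverse transform) can likewise be sketched. The code below is a deliberately simplified 1-D illustration using the discrete Fourier transform and a max-magnitude coefficient selection rule; the surveyed methods operate on 2-D images with pyramid, wavelet, or multi-scale geometric transforms and more elaborate fusion rules, and the two "signals" here are hypothetical stand-ins for image rows.

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def frequency_domain_fusion(sig_a, sig_b):
    """Transform both inputs, keep the larger-magnitude coefficient at
    each frequency, then invert the transform to get the fused signal."""
    A, B = dft(sig_a), dft(sig_b)
    fused = [a if abs(a) >= abs(b) else b for a, b in zip(A, B)]
    return idft(fused)

smooth = [10.0, 10.0, 10.0, 10.0]   # carries the DC (brightness) level
detail = [10.0, 0.0, 10.0, 0.0]     # carries a high-frequency oscillation
print(frequency_domain_fusion(smooth, detail))  # → approximately [15.0, 5.0, 15.0, 5.0]
```

The fused output keeps the stronger DC component of `smooth` together with the oscillating component of `detail`, which is exactly the detail-retention behavior the abstract attributes to multi-level decomposition methods.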
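Among the information theory-based quality indicators in the abstract's four-way taxonomy, the Shannon entropy of the fused image is one of the simplest: higher entropy suggests the fused result carries more information. A minimal sketch, assuming a hypothetical 8-bit grayscale image stored as nested lists:

```python
import math

def image_entropy(img):
    """Shannon entropy (in bits) of an 8-bit grayscale image given as
    nested lists of integer intensities in [0, 255]."""
    hist = [0] * 256
    total = 0
    for row in img:
        for pixel in row:
            hist[pixel] += 1
            total += 1
    # Sum -p*log2(p) over the non-empty histogram bins.
    return -sum((c / total) * math.log2(c / total) for c in hist if c)

mixed = [[0, 255], [0, 255]]  # two equally likely gray levels
print(image_entropy(mixed))   # → 1.0
```

A constant image scores 0 bits, while an image using many gray levels evenly scores close to the 8-bit maximum; full-reference indicators in the same family (e.g. mutual information with the source images) build on the same histogram machinery.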