Parallel decomposition adaptive fusion model: cross-modal image fusion of lung tumors
2023, Vol. 28, No. 1, pp. 221-233
Received: 2021-10-19; Revised: 2022-05-24; Accepted: 2022-05-31; Published in print: 2023-01-16
DOI: 10.11834/jig.210988
Objective
Cross-modal pixel-level medical image fusion is a research focus in precision medicine. To address the low contrast and poor edge-detail preservation of traditional pixel-level fusion algorithms, this paper proposes a parallel-decomposition adaptive image fusion model.
Method
First, the non-subsampled contourlet transform (NSCT) extracts the directional detail information of the source images, decomposing each image into a low-frequency sub-band and high-frequency sub-bands; in parallel, latent low-rank representation (LatLRR) extracts the salient energy information, yielding a low-rank part, a salient part, and a noise part. For low-frequency fusion, the NSCT low-frequency sub-band carries the main energy of the source image and the fusion process involves a fuzzy many-to-one mapping, so an adaptive rule based on fuzzy logic is adopted, with a Gaussian membership function representing the fuzzy relationship. For high-frequency fusion, the NSCT high-frequency sub-band coefficients show strong structural similarity and contain the contour and edge information of the image, so an adaptive fusion method based on the Piella framework is used: the averaged structural similarity serves as the match measure, the regional variance as the activity measure, and an adaptive weighted decision factor fuses the high-frequency sub-bands.
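The core idea of the parallel decomposition is to split each source image into a low-frequency approximation (main energy) and a high-frequency residual (detail). NSCT and LatLRR are specialized transforms with no standard Python implementation; the following numpy-only sketch uses a separable box blur as an illustrative stand-in for the low-pass stage (a real NSCT additionally produces several directional high-frequency sub-bands):

```python
import numpy as np

def decompose(img, ksize=7):
    """Split an image into a low-frequency approximation and a
    high-frequency residual. A separable box blur stands in for the
    NSCT low-pass stage; real NSCT yields multiple directional
    high-frequency sub-bands instead of one residual."""
    img = img.astype(np.float64)
    kernel = np.ones(ksize) / ksize
    pad = ksize // 2
    # Edge-pad so the blurred output matches the input size.
    padded = np.pad(img, pad, mode="edge")
    # Separable blur: filter every row, then every column.
    low = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    low = np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, low)
    high = img - low          # detail (contour/edge) information
    return low, high
```

By construction the two parts sum back to the original image, which mirrors the requirement that the decomposition lose no information before fusion.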
Result
Tests were run on five groups of CT (computed tomography) lung-window/PET (positron emission tomography) images and five groups of CT mediastinal-window/PET images. Compared with the baseline methods, the fused images of the proposed method improve the average gradient by 66.6%, edge intensity by 64.4%, edge preservation by 52.7%, and spatial frequency by 47.3%.
Conclusion
The fused images produced by the proposed method achieve good results on both subjective and objective evaluation criteria, and can help physicians reach faster and more accurate diagnoses.
Objective
Cross-modal medical image fusion has become a key technique for precision diagnosis. Thanks to advances in precision medicine, positron emission tomography (PET) and computed tomography (CT) are widely used for lung tumor detection. The high resolution of CT images benefits the diagnosis of bone tissue, but CT depicts lesions poorly; in particular, tumor infiltration cannot be displayed clearly. PET images show soft tissue clearly but render bone tissue weakly. Medical image fusion can integrate the anatomical and functional information of the lesion region and locate lung tumors more accurately. To resolve the low contrast and poor edge-detail retention of traditional pixel-level image fusion, this paper develops a parallel-decomposition adaptive fusion model.
Method
The proposed model runs two decompositions in parallel: 1) the non-subsampled contourlet transform (NSCT) extracts multi-directional detail information, and 2) latent low-rank representation (LatLRR) extracts key feature information. Four considerations shape the low-frequency fusion rule. First, image fusion is a many-to-one mapping of gray values, and this mapping carries inherent uncertainty. Second, noise in the image arises from respiration, blood flow, and the overlap between organs and tissues; it confuses the contour features and magnifies the ambiguity of the image. Third, NSCT decomposes the original image into low-frequency and high-frequency sub-bands, and the low-frequency sub-band retains the main energy information, such as the contour and background of the original image, along with the uncertain mapping relationship, so a reasonable fusion rule is needed to handle that relationship. Fourth, fusion rules based on fuzzy set theory represent the whole image with a fuzzy matrix and resolve it algorithmically, which handles the ambiguity in the fusion process effectively, and the Gaussian membership function describes the contextual fuzzy information of the low-frequency sub-band well. Therefore, the Gaussian membership function is used as the adaptive weighting coefficient of the low-frequency sub-band, and a fuzzy-logic-based adaptive weighted fusion rule is adopted.
The high-frequency fusion rule rests on three considerations. First, the high-frequency sub-bands contain the contours and edge details of the tissues and organs in the original image, so they exhibit structural similarity and their coefficients are strongly correlated. Second, the structural similarity index measure (SSIM) quantifies the similarity between two images and reflects the correlation between high-frequency sub-band coefficients well, so the averaged structural similarity index is used to measure the coefficient correlation between the two high-frequency sub-bands. Third, the lesion region of a lung tumor typically spans no more than one hundred pixels; region-based fusion rules preserve the characteristics of the lesion region more completely, and the regional variance represents the degree of gray-value variation in a local region: the larger the variance, the richer the detail information. Therefore, the regional variance is selected as the basis for computing image activity. Because the high-frequency sub-bands carry the contour and edge information of the image, they are fused within the Piella framework: the averaged structural similarity serves as the match measure, the regional variance as the activity measure, and an adaptive weighted decision factor fuses the high-frequency sub-bands. Finally, the effectiveness of the algorithm is verified through comparative experiments.
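The two fusion rules above can be sketched in numpy. This is a minimal illustration, not the paper's exact formulation: the Gaussian membership here is centred on each sub-band's own mean, the match measure is a simple local-variance ratio standing in for averaged SSIM, and the decision threshold `alpha` is a hypothetical parameter:

```python
import numpy as np

def local_stat(x, win=3):
    """Local mean and variance over a sliding win x win window (edge-padded)."""
    pad = win // 2
    p = np.pad(x, pad, mode="edge")
    # Stack the shifted copies of the window and average over them.
    stack = np.stack([p[i:i + x.shape[0], j:j + x.shape[1]]
                      for i in range(win) for j in range(win)])
    return stack.mean(axis=0), stack.var(axis=0)

def fuse_low(la, lb):
    """Fuzzy-logic low-frequency rule: a Gaussian membership function
    acts as the adaptive weighting coefficient of each sub-band."""
    def membership(x):
        m, s = x.mean(), x.std() + 1e-9
        return np.exp(-((x - m) ** 2) / (2 * s ** 2))
    wa, wb = membership(la), membership(lb)
    return (wa * la + wb * lb) / (wa + wb + 1e-9)

def fuse_high(ha, hb, win=3, alpha=0.75):
    """Piella-style high-frequency rule: regional variance as activity,
    a local similarity ratio as match measure (SSIM stand-in), and a
    weighted decision factor choosing between averaging and selection."""
    _, va = local_stat(ha, win)
    _, vb = local_stat(hb, win)
    match = (2 * np.sqrt(va * vb) + 1e-9) / (va + vb + 1e-9)  # in (0, 1]
    sel = np.where(va >= vb, ha, hb)        # pick the more active coefficient
    w = va / (va + vb + 1e-9)
    avg = w * ha + (1 - w) * hb             # activity-weighted average
    # High match -> weighted average; low match -> selection.
    return np.where(match > alpha, avg, sel)
```

The select-or-average decision driven by a match measure is the classic structure of the Piella framework; the paper's adaptive decision factor refines the weights, which this sketch approximates with an activity ratio.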
Result
Five groups of CT pulmonary-window/PET images and five groups of CT mediastinal-window/PET images are tested. An experiment integrating compressed sensing with NSCT is also carried out, and six objective evaluation indexes are selected to assess the quality of the fused images. The experimental results show that the average gradient, edge intensity, and spatial frequency of the fused images are improved by 66.6%, 64.4%, and 80.3%, respectively.
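Two of the reported evaluation indexes have widely used textbook definitions, sketched below with numpy; the paper may use normalized variants of these formulas:

```python
import numpy as np

def average_gradient(img):
    """Average gradient: mean magnitude of horizontal and vertical
    first differences; larger values indicate sharper detail."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]   # crop both to a common (H-1, W-1) grid
    gy = np.diff(img, axis=0)[:, :-1]
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2))

def spatial_frequency(img):
    """Spatial frequency: RMS of row and column first differences."""
    img = img.astype(np.float64)
    rf = np.mean(np.diff(img, axis=1) ** 2)   # row frequency
    cf = np.mean(np.diff(img, axis=0) ** 2)   # column frequency
    return np.sqrt(rf + cf)
```

Both measures are zero for a flat image and grow with edge content, which is why relative improvements in them indicate better-preserved detail in the fused result.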
Conclusion
The proposed method can improve the contrast of fused images and retain edge details effectively, helping clinicians make faster and more accurate diagnoses and treatment decisions.
Bondžulić B and Petrović V. 2008. Objective image fusion performance measures. Military Technical Courier, 56(2): 181-193 [DOI: 10.5937/vojtehg0802181B]
Cai H Y, Zhuo L R, Zhu P, Huang Z H and Wu X Y. 2018. Fusion of infrared and visible images based on non-subsampled contourlet transform and intuitionistic fuzzy set. Acta Photonica Sinica, 47(6): #0610002
蔡怀宇, 卓励然, 朱攀, 黄战华, 武晓宇. 2018. 基于非下采样轮廓波变换和直觉模糊集的红外与可见光图像融合. 光子学报, 47(6): #0610002 [DOI: 10.3788/gzxb20184706.0610002]
da Cunha A L, Zhou J and Do M N. 2006. The nonsubsampled contourlet transform: theory, design, and applications. IEEE Transactions on Image Processing, 15(10): 3089-3101 [DOI: 10.1109/TIP.2006.877507]
Jing Z L, Xiao G and Li Z H. 2007. Image Fusion: Theory and Applications. Beijing: Higher Education Press
敬忠良, 肖刚, 李振华. 2007. 图像融合——理论与应用. 北京: 高等教育出版社
Khan M A, Rubab S, Kashif A, Sharif M I, Muhammad N, Shah J H, Zhang Y D and Satapathy S C. 2020. Lungs cancer classification from CT images: an integrated design of contrast based classical features fusion and selection. Pattern Recognition Letters, 129: 77-85 [DOI: 10.1016/j.patrec.2019.11.014]
Li H and Wu X J. 2018. Multi-focus image fusion using dictionary learning and low-rank representation [EB/OL]. [2021-12-18]. https://arxiv.org/pdf/1804.08355.pdf
Li H and Wu X J. 2022. Infrared and visible image fusion using latent low-rank representation [EB/OL]. [2022-01-29]. https://arxiv.org/pdf/1804.08992.pdf
Li Y, Zhao J L, Lv Z H and Li J H. 2021. Medical image fusion method by deep learning. International Journal of Cognitive Computing in Engineering, 2: 21-29 [DOI: 10.1016/j.ijcce.2020.12.004]
Liu G C and Yan S C. 2011. Latent low-rank representation for subspace segmentation and feature extraction//Proceedings of 2011 International Conference on Computer Vision. Barcelona, Spain: IEEE: 1615-1622 [DOI: 10.1109/ICCV.2011.6126422]
Liu J S and Jiang W. 2018. Improved image fusion algorithm based on nonsubsampled Contourlet transform. Journal of Computer Applications, 38(S1): 194-197
刘卷舒, 蒋伟. 2018. 改进的基于非下采样的Contourlet变换的图像融合算法. 计算机应用, 38(S1): 194-197
Liu X B, Mei W B and Du H Q. 2018. Multi-modality medical image fusion based on image decomposition framework and nonsubsampled shearlet transform. Biomedical Signal Processing and Control, 40: 343-350 [DOI: 10.1016/j.bspc.2017.10.001]
Lu H L, Zhou T, Wang H Q, Xia Y and Shi H B. 2017. PET/CT image pixel-level fusion algorithm based on NSCT and compressed sensing. Journal of Graphics, 38(6): 887-895
陆惠玲, 周涛, 王惠群, 夏勇, 师宏斌. 2017. 基于非下采样轮廓波变换和压缩感知的PET/CT像素级融合算法. 图学学报, 38(6): 887-895 [DOI: 10.11996/JG.j.2095-302X.2017060887]
Piella G. 2003. A general framework for multiresolution image fusion: from pixels to regions. Information Fusion, 4(4): 259-280 [DOI: 10.1016/S1566-2535(03)00046-0]
Polinati S and Dhuli R. 2020. Multimodal medical image fusion using Empirical wavelet decomposition and local energy maxima. Optik, 205: #163947 [DOI: 10.1016/j.ijleo.2019.163947]
Sung H, Ferlay J, Siegel R L, Laversanne M, Soerjomataram I, Jemal A and Bray F. 2021. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: A Cancer Journal for Clinicians, 71(3): 209-249 [DOI: 10.3322/caac.21660]
Wang H Q, Zhou T, Lu H L, Xia Y and Wang W W. 2016. Lung cancer PET/CT image fusion based on multiresolution transform and compressive sensing. Video Engineering, 40(3): 11-16
王惠群, 周涛, 陆惠玲, 夏勇, 王文文. 2016. 基于多分辨率变换和压缩感知的PET/CT融合方法. 电视技术, 40(3): 11-16 [DOI: 10.16280/j.vide0e.2016.03.003]
Wang Y, Yang Y C, Dang J W and Wang Y P. 2019. Image fusion based on fuzzy logic combined with adaptive pulse coupled neural network in nonsubsampled Contourlet transform domain. Laser and Optoelectronics Progress, 56(10): #101006
王艳, 杨艳春, 党建武, 王阳萍. 2019. 非下采样Contourlet变换域内结合模糊逻辑和自适应脉冲耦合神经网络的图像融合. 激光与光电子学进展, 56(10): #101006 [DOI: 10.3788/LOP56.101006]
Wang Z, Bovik A C, Sheikh H R and Simoncelli E P. 2004. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4): 600-612 [DOI: 10.1109/TIP.2003.819861]
Wei X Y. 2015. Pixel level Fusion Research Based on Non-Small Cell Lung Cancer PET/CT Images. Yinchuan: Ningxia Medical University
魏兴瑜. 2015. 非小细胞肺癌PET/CT图像像素级融合研究. 银川: 宁夏医科大学
Zhang Y D, Dong Z C, Wang S H, Yu X, Yao X J, Zhou Q H, Hu H, Li M, Jiménez-Mesa C, Ramirez J, Martinez F J and Gorriz J M. 2020. Advances in multimodal data fusion in neuroimaging: overview, challenges, and novel orientation. Information Fusion, 64: 149-187 [DOI: 10.1016/j.inffus.2020.07.006]
Zhang Z and Blum R S. 1999. A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application. Proceedings of the IEEE, 87(8): 1315-1326 [DOI: 10.1109/5.775414]
Zhou T, Liu S, Dong Y L, Huo B Q and Ma Z J. 2021. Research on pixel-level image fusion based on multi-scale transformation: progress application and challenges. Journal of Image and Graphics, 26(9): 2094-2110
周涛, 刘珊, 董雅丽, 霍兵强, 马宗军. 2021. 多尺度变换像素级医学图像融合: 研究进展、应用和挑战. 中国图象图形学报, 26(9): 2094-2110 [DOI: 10.11834/jig.200803]
Zhu Z Q, Chai Y, Yin H P, Li Y X and Liu Z D. 2016. A novel dictionary learning approach for multi-modality medical image fusion. Neurocomputing, 214: 471-482 [DOI: 10.1016/j.neucom.2016.06.036]
Zhu Z Q, Zheng M Y, Qi G Q, Wang D and Xiang Y. 2019. A phase congruency and local laplacian energy based multi-modality medical Image fusion method in NSCT domain. IEEE Access, 7: 20811-20824 [DOI: 10.1109/ACCESS.2019.2898111]