Domain-adaptive-learning based diabetic retinopathy grading diagnosis
2022, Vol. 27, No. 11, Pages 3356-3370
Print publication date: 2022-11-16
Accepted: 2021-12-08
DOI: 10.11834/jig.210411
Ruoxian Song, Peng Cao, Dazhe Zhao. Domain-adaptive-learning based diabetic retinopathy grading diagnosis[J]. Journal of Image and Graphics, 2022,27(11):3356-3370.
Objective
Traditional diabetic retinopathy (DR) diagnosis relies on the accurate detection of early pathological features, but a supervised classification model cannot be built effectively when the dataset lacks annotated lesion regions, and introducing auxiliary datasets raises the problem of cross-domain data heterogeneity. In addition, most existing DR diagnosis methods cannot intuitively and semantically explain the predictions of the medical model. To address these issues, this paper proposes an end-to-end automatic multi-class DR grading method that combines domain-adaptive learning and is jointly strengthened by an attention mechanism and weakly supervised learning.
Method
First, a lesion detection model is trained on auxiliary data with annotated lesion regions, and DR diagnosis on the target-domain dataset is then cast as a weakly supervised learning problem. The multi-class prediction results guide a deep cross-domain generative adversarial network to improve the quality of cross-domain sample images, which are used to fine-tune the lesion detection model and thereby filter out irrelevant lesion samples in the target domain, improving multi-class grading performance. Finally, an attention mechanism is integrated into the overall model to provide interpretability that supports its classification decisions from the perspective of pathological diagnosis.
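The patch-filtering step described above can be illustrated with a minimal PyTorch sketch. This is not the authors' released code: the ResNet-18 backbone, the function names (build_lesion_detector, filter_patches) and the threshold tau are assumptions made here for illustration. The detector, assumed to be pretrained on the auxiliary domain and fine-tuned on GAN-translated patches, scores target-domain patches and discards low-confidence, irrelevant ones.

```python
# Minimal sketch (not the paper's official implementation) of lesion-patch
# filtering: a binary lesion/background classifier scores target-domain patches
# and only confident lesion candidates are kept for the grading stage.
# build_lesion_detector, filter_patches and tau are hypothetical names.
import torch
import torch.nn as nn
from torchvision import models


def build_lesion_detector(num_classes: int = 2) -> nn.Module:
    """Binary lesion vs. background classifier; a ResNet-18 backbone is assumed."""
    net = models.resnet18(weights=None)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net


@torch.no_grad()
def filter_patches(detector: nn.Module, patches: torch.Tensor, tau: float = 0.5):
    """Keep patches whose lesion probability is at least tau."""
    detector.eval()
    probs = torch.softmax(detector(patches), dim=1)[:, 1]  # probability of the lesion class
    keep = probs >= tau
    return patches[keep], probs[keep]


if __name__ == "__main__":
    detector = build_lesion_detector()        # would be pretrained/fine-tuned in practice
    dummy = torch.randn(8, 3, 128, 128)       # a toy batch of target-domain patches
    kept, scores = filter_patches(detector, dummy, tau=0.5)
    print(kept.shape, scores)
```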
Result
In multi-class DR grading experiments on the public Messidor dataset, the proposed method achieves an average accuracy of 71.2% and an AUC (area under curve) value of 80.8%, showing clear advantages over a variety of other methods, and can assist physicians in clinical fundus screening.
Conclusion
With only image-level supervision and no pixel-level lesion annotations, the proposed DR classification method combined with domain-adaptive learning can grade fundus images efficiently and automatically, avoiding the limitations of hand-crafted lesion features in medical images and the missed or incorrect diagnoses that fatigue may cause. In addition, it provides physicians with pathology-related evidence for its classifications and achieves good classification performance.
Objective
Diabetic retinopathy (DR) is a common complication of diabetes with a high incidence. Recent algorithms for DR screening on fundus images are designed to relieve the pressure caused by the uneven distribution of the disease and high population density. Traditional DR diagnosis relies on the accurate detection of early pathological features such as microaneurysms (MA) and hemorrhages (H). However, a supervised classification model cannot be trained effectively because lesion annotations are lacking, and such pixel-level medical annotation is time-consuming and labor-intensive. Introducing an auxiliary dataset with annotated lesion regions does not directly improve the classification model because of the domain gap. In addition, most existing DR diagnostic methods cannot explain the predictions of the medical model. We present an end-to-end automatic DR grading algorithm based on domain-adaptive learning that integrates weakly supervised learning and an attention mechanism.
Method
First, to handle the mismatch between image-level DR diagnosis labels and pixel-level lesion location information, an auxiliary dataset with annotated lesion regions is used to train a supervised lesion detection model. Next, DR grading with only image-level labels is formulated as a weakly supervised learning problem. To bridge the domain gap, a deep cross-domain generative adversarial network (GAN) is employed to produce higher-quality cross-domain patches, and the patch-level classification model is fine-tuned on them to filter out irrelevant lesion samples in the target domain, which improves image-level multi-class diagnosis. Finally, an attention mechanism is integrated into the entire model to strengthen the interpretability of grading for pathological diagnosis. The model assumes that local patches within a whole image are independently and identically distributed, establishes the local-global relationship between small lesions and the complete image, and its ability to trace unclear lesion areas benefits the classification of retinal images into DR grades (healthy, mild, moderate and severe).
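As a simplified illustration of the attention-based weak supervision described above (our own sketch under assumed dimensions, not the paper's exact architecture), the following PyTorch module pools patch-level features of one fundus image into an image-level representation with learned attention weights and predicts one of the four DR grades; the weights can be visualized to indicate which patches drive the decision.

```python
# Illustrative attention-based weakly-supervised aggregation (assumed feature
# dimension of 512 and four DR grades; all names are hypothetical).
import torch
import torch.nn as nn


class AttentionGrader(nn.Module):
    def __init__(self, feat_dim: int = 512, hidden: int = 128, num_grades: int = 4):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.classifier = nn.Linear(feat_dim, num_grades)

    def forward(self, patch_feats: torch.Tensor):
        # patch_feats: (num_patches, feat_dim) features of one fundus image
        scores = self.attention(patch_feats)               # (num_patches, 1)
        weights = torch.softmax(scores, dim=0)             # attention over patches
        image_feat = (weights * patch_feats).sum(dim=0)    # weighted pooling
        logits = self.classifier(image_feat)               # four-grade prediction
        return logits, weights.squeeze(-1)                 # weights for visualization


if __name__ == "__main__":
    grader = AttentionGrader()
    feats = torch.randn(36, 512)        # e.g. 36 patch embeddings (toy example)
    logits, attn = grader(feats)
    print(logits.shape, attn.shape)     # torch.Size([4]) torch.Size([36])
```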
Result
The public Messidor dataset consists of 1 200 fundus images with image-level labels of DR severity. Meanwhile, the IDRiD dataset is used as the source dataset to build a binary lesion (MA and H) versus normal classification task. The experimental results illustrate the contributions of our end-to-end framework: 1) it improves disease grading on the target domain without lesion labels; 2) it achieves domain adaptation across datasets; and 3) it highlights ambiguous regions through the attention mechanism. On the challenging Messidor benchmark, our method achieves an accuracy of 71.2% and an AUC (area under curve) value of 80.8%. We evaluate the contributions of the individual modules, namely sample filtering, domain adaptation and attention-based weakly supervised DR grading; the ablation results show that these modules improve the AUC by 11.8%, 20.2% and 15.8%, respectively. Filtering irrelevant samples reduces their negative impact on the final results, and GAN-based cross-domain sample generation alleviates data heterogeneity; together they promote pathology-related interpretability and enhance the generalization ability of the model. Moreover, the ablation experiments analyze the influence of the hyper-parameters in detail, and the interpretability of DR grading is visualized.
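For reference, the sketch below shows one way the reported metrics can be computed with scikit-learn; the exact AUC averaging scheme of the paper is not specified here, so one-vs-rest macro averaging and the toy labels are assumptions.

```python
# Toy example of computing average accuracy and a multi-class AUC
# (one-vs-rest, macro-averaged) for four DR grades.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = np.array([0, 1, 2, 3, 1, 0])               # ground-truth DR grades
y_prob = rng.dirichlet(np.ones(4), size=6)          # predicted class probabilities
y_pred = y_prob.argmax(axis=1)

acc = accuracy_score(y_true, y_pred)
auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")
print(f"accuracy={acc:.3f}, AUC={auc:.3f}")
```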
Conclusion
Our domain-adaptive-learning based classification method achieves effective grading diagnosis of fundus images through an initial lesion detection stage, a transfer learning strategy, and the local-global mapping between lesions and the entire retinal image. It has the potential to distinguish the severity of lesion types, to recover subtle pathological features, and to cope with the imbalance between lesion and background patches. With only image-level supervision and no pixel-level lesion annotations, it can grade fundus images effectively and automatically, avoiding the limitations of manual segmentation and labeling of lesions in medical images. Furthermore, its interpretability can support the detection of potential high-risk regions. The model can be further extended to other weakly supervised classification tasks on medical images.
diabetic retinopathy (DR); fundus image; attention mechanism; deep learning; weakly-supervised learning; domain adaptation
Alzami F, Abdussalam, Megantara R A, Fanani A Z and Purwanto. 2019. Diabetic retinopathy grade classification based on fractal analysis and random forest//Proceedings of 2019 International Seminar on Application for Technology of Information and Communication (iSemantic). Semarang, Indonesia: IEEE: 272-276[DOI: 10.1109/ISEMANTIC.2019.8884217]
Cao W, Shan J, Czarnek N and Li L. 2017. Microaneurysm detection in fundus images using small image patches and machine learning methods//Proceedings of 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). Kansas City, USA: IEEE: 325-331[DOI: 10.1109/BIBM.2017.8217671]
Chen F J, Zhu F, Wu Q X, Hao Y M, Wang E D and Cui Y G. 2021. A survey about image generation with generative adversarial nets. Chinese Journal of Computers, 44(2): 347-369[DOI: 10.11897/SP.J.1016.2021.00347]
Cheplygina V, de Bruijne M and Pluim J P W. 2019. Not-so-supervised: a survey of semi-supervised, multi-instance, and transfer learning in medical image analysis. Medical Image Analysis, 54: 280-296[DOI: 10.1016/j.media.2019.03.009]
Crossland L, Askew D, Ware R, Cranstoun P, Mitchell P, Bryett A and Jackson C. 2016. Diabetic retinopathy screening and monitoring of early stage disease in Australian general practice: tackling preventable blindness within a chronic care model. Journal of Diabetes Research, 2016: #8405395[DOI: 10.1155/2016/8405395]
Decencière E, Zhang X W, Cazuguel G, Lay B, Cochener B, Trone C, Gain P, Ordonez R, Massin P, Erginay A, Charton B and Klein J C. 2014. Feedback on a publicly distributed image database: the Messidor database. Image Analysis and Stereology, 33(3): 231-234[DOI: 10.5566/ias.1155]
Goodfellow I J, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A and Bengio Y. 2014. Generative adversarial networks[EB/OL]. [2021-05-26]. https://arxiv.org/pdf/1406.2661.pdf
Isola P, Zhu J Y, Zhou T H and Efros A A. 2017. Image-to-image translation with conditional adversarial networks//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE: 5967-5976[DOI: 10.1109/CVPR.2017.632]
Karras T, Laine S, Aittala M, Hellsten J, Lehtinen J and Aila T. 2020. Analyzing and improving the image quality of StyleGAN//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 8107-8116[DOI: 10.1109/CVPR42600.2020.00813]
Khan M A, Balgi A P, Chaithra C and Pramod Kumar S. 2020. Diabetic retinopathy detection by image processing algorithms and machine learning technique. JNNCE Journal of Engineering and Management, 4(1): #8[DOI: 10.37314/JJEM.2020.040102]
Khatun A and Hossain S G S. 2019. Early detection of diabetic retinopathy and severity scale measurement: a progressive review and scopes[EB/OL]. [2021-05-26]. https://arxiv.org/pdf/1912.12829.pdf
Labhade J D, Chouthmol L K and Deshmukh S. 2016. Diabetic retinopathy detection using soft computing techniques//Proceedings of 2016 International Conference on Automatic Control and Dynamic Optimization Techniques (ICACDOT). Pune, India: IEEE: 175-178[DOI: 10.1109/ICACDOT.2016.7877573]
Larsen A B L, Sønderby S K, Larochelle H and Winther O. 2016. Autoencoding beyond pixels using a learned similarity metric[EB/OL]. [2021-05-26]. https://arxiv.org/pdf/1512.09300.pdf
Li C and Wand M. 2016. Precomputed real-time texture synthesis with Markovian generative adversarial networks//Proceedings of the 14th European Conference on Computer Vision (ECCV). Amsterdam, the Netherlands: Springer: 702-716[DOI: 10.1007/978-3-319-46487-9_43]
Li X M, Hu X W, Yu L Q, Zhu L, Fu C W and Heng P A. 2020. CANet: cross-disease attention network for joint diabetic retinopathy and diabetic macular edema grading. IEEE Transactions on Medical Imaging, 39(5): 1483-1493[DOI: 10.1109/TMI.2019.2951844]
Lucic M, Kurach K, Michalski M, Gelly S and Bousquet O. 2018. Are GANs created equal? A large-scale study[EB/OL]. [2021-05-26]. https://arxiv.org/pdf/1711.10337.pdf
Luo L, Xue D Y and Feng X L. 2020. Automatic diabetic retinopathy grading via self-knowledge distillation. Electronics, 9(9): #1337[DOI: 10.3390/electronics9091337]
Mnih V, Heess N, Graves A and Kavukcuoglu K. 2014. Recurrent models of visual attention[EB/OL]. [2021-05-26]. https://arxiv.org/pdf/1406.6247.pdf
Porwal P, Pachade S, Kamble R, Kokare M, Deshmukh G, Sahasrabuddhe V and Meriaudeau F. 2018. Indian diabetic retinopathy image dataset (IDRiD): a database for diabetic retinopathy screening research. Data, 3(3): #25[DOI: 10.3390/data3030025]
Radford A, Metz L and Chintala S. 2016. Unsupervised representation learning with deep convolutional generative adversarial networks[EB/OL]. [2021-05-26]. https://arxiv.org/pdf/1511.06434.pdf
Saeedi P, Petersohn I, Salpea P, Malanda B, Karuranga S, Unwin N, Colagiuri S, Guariguata L, Motala A A, Ogurtsova K, Shaw J E, Bright D and Williams R. 2019. Global and regional diabetes prevalence estimates for 2019 and projections for 2030 and 2045: results from the international diabetes federation diabetes atlas, 9th edition. Diabetes Research and Clinical Practice, 157: #107843[DOI: 10.1016/j.diabres.2019.107843]
Seoud L, Chelbi J and Cheriet F. 2015. Automatic grading of diabetic retinopathy on a public database//Proceedings of the Ophthalmic Medical Image Analysis International Workshop (OMIA). Munich, Germany: [s. n.]: 97-104[DOI: 10.17077/omia.1032]
Simonyan K and Zisserman A. 2015. Very deep convolutional networks for large-scale image recognition[EB/OL]. [2021-05-26]. https://arxiv.org/pdf/1409.1556.pdf
Tan C Q, Sun F C, Kong T, Zhang W C, Yang C and Liu C F. 2018. A survey on deep transfer learning//Proceedings of the 27th International Conference on Artificial Neural Networks. Rhodes, Greece: Springer: 270-279[DOI: 10.1007/978-3-030-01424-7_27]
Tilahun M, Gobena T, Dereje D, Welde M and Yideg G. 2020. Prevalence of diabetic retinopathy and its associated factors among diabetic patients at Debre Markos referral hospital, Northwest Ethiopia, 2019: hospital-based cross-sectional study. Diabetes, Metabolic Syndrome and Obesity: Targets and Therapy, 13: 2179-2187[DOI: 10.2147/DMSO.S260694]
Voets M, Møllersen K and Bongo L A. 2019. Reproduction study using public data of: development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. PLoS One, 14(6): #0217541[DOI: 10.1371/journal.pone.0217541]
Wang X G, Yan Y L, Tang P, Bai X and Liu W Y. 2018. Revisiting multiple instance neural networks. Pattern Recognition, 74: 15-24[DOI: 10.1016/j.patcog.2017.08.026]
Wang Z, Yin Y X, Shi J P, Fang W, Li H S and Wang X G. 2017. Zoom-in-net: deep mining lesions for diabetic retinopathy detection//Proceedings of the 20th International Conference on Medical Image Computing and Computer-Assisted Intervention. Quebec City, Canada: Springer: 267-275[DOI: 10.1007/978-3-319-66179-7_31]
Yang Z C, Yang D Y, Dyer C, He X D, Smola A and Hovy E. 2016. Hierarchical attention networks for document classification//Proceedings of 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. San Diego, USA: Association for Computational Linguistics: 1480-1489[DOI: 10.18653/v1/N16-1174]
Zago G T, Andreão R V, Dorizzi B and Salles E O T. 2020. Diabetic retinopathy detection using red lesion localization and convolutional neural networks. Computers in Biology and Medicine, 116: #103537[DOI: 10.1016/j.compbiomed.2019.103537]
Zhu J Y, Park T, Isola P and Efros A A. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks//Proceedings of 2017 IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE: 2242-2251[DOI: 10.1109/ICCV.2017.244]