Nuclear cataract classification based on multi-region fusion attention network model
2022, Vol. 27, No. 3, pp. 948-960
Received: 2021-08-20
Revised: 2021-11-19
Accepted: 2021-11-26
Published in print: 2022-03-16
DOI: 10.11834/jig.210735
Objective
Nuclear cataract is a leading blinding and vision-impairing ophthalmic disease; early intervention and cataract surgery can effectively improve patients' vision and quality of life. Anterior segment optical coherence tomography (AS-OCT) images can capture cataract opacity information in a non-contact, objective, and fast manner. Clinical studies have found a strong correlation and high repeatability between nuclear cataract severity and pixel features of the nucleus region, such as the mean value, in AS-OCT images. However, there has been little work on automatic nuclear cataract classification based on AS-OCT images, and the classification results still leave considerable room for improvement. To this end, this paper proposes a novel multi-region fusion attention network (MRA-Net) to accurately classify nuclear cataract severity in AS-OCT images.
Method
In the proposed multi-region fusion attention model, we design a multi-region fusion attention (MRA) block that fuses feature representations from different nucleus regions to enhance classification results. In addition, we examine how splitting the AS-OCT image dataset at the participant level versus the eye level affects nuclear cataract classification results.
Result
On a self-built AS-OCT image dataset, the proposed model achieves an overall classification accuracy of 87.78%, at least 1% higher than the comparison methods. Results across ten classification algorithms show that the eye-based split of the AS-OCT dataset yields better classification results than the participant-based split, with maximum gains of 4.03% in F1 and 8% in Kappa.
Conclusion
The proposed model accounts for differences in feature distribution across regions of a feature map, making nuclear cataract classification more accurate. The results of the different dataset splitting methods show that, because the two eyes of the same person have similar nuclear cataract severity, AS-OCT image datasets for cataract studies should be split at the participant level.
Objective
Cataracts are the primary cause of human blindness and visual impairment. Early intervention and cataract surgery can effectively improve the vision and quality of life of cataract patients. Anterior segment optical coherence tomography (AS-OCT) images can capture cataract opacity information in a non-contact, objective, and fast manner. Compared with other ophthalmic images such as fundus images, AS-OCT images are capable of capturing the nucleus region clearly, which is very significant for nuclear cataract (NC) diagnosis. Clinical studies have identified a strong correlation and high repeatability between the average density value of the nucleus region and NC severity levels in AS-OCT images. Moreover, clinical work has also suggested correlations between different nucleus regions and NC severity levels. These studies provide a clinical reference for automatic AS-OCT image-based NC classification. However, automatic NC classification based on AS-OCT images has rarely been studied, and there is considerable room to improve NC classification performance on AS-OCT images.
Method
Motivated by this clinical research on NC, this paper proposes an efficient multi-region fusion attention network (MRA-Net) that infuses clinical prior knowledge, aiming to classify nuclear cataract severity levels on AS-OCT images accurately. In MRA-Net, we construct a multi-region fusion attention (MRA) block that fuses feature representations from different nucleus regions to enhance overall classification performance; the block not only adopts a summation operation to fuse information from different regions but also applies the softmax function to focus on salient channels and suppress redundant ones. Because residual connections can alleviate the gradient vanishing issue, the MRA block is plugged into a stack of Residual-MRA modules to build MRA-Net.
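The abstract does not give the exact layout of the MRA block, but its described ingredients (region-wise fusion by summation, softmax channel weighting, and residual connections) can be sketched roughly as follows. This is a minimal PyTorch sketch that assumes the nucleus regions correspond to horizontal strips of the feature map; the region layout, reduction ratio, and layer choices are illustrative and not the authors' exact design.

```python
# Minimal sketch of an MRA-style block (assumed design, see note above).
import torch
import torch.nn as nn


class MRABlock(nn.Module):
    """Channel attention whose weights fuse descriptors from several regions."""

    def __init__(self, channels: int, num_regions: int = 3, reduction: int = 16):
        super().__init__()
        self.num_regions = num_regions
        self.fc = nn.Sequential(                          # shared excitation MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Split the feature map into horizontal strips standing in for nucleus regions.
        regions = torch.chunk(x, self.num_regions, dim=2)
        # Pool each region into a channel descriptor and pass it through the MLP.
        descriptors = [self.fc(r.mean(dim=(2, 3))) for r in regions]   # each (b, c)
        # Summation fuses the per-region information, as described in the text.
        fused = torch.stack(descriptors).sum(dim=0)                    # (b, c)
        # Softmax highlights salient channels and suppresses redundant ones.
        weights = torch.softmax(fused, dim=1).view(b, c, 1, 1)
        return x * weights


class ResidualMRA(nn.Module):
    """One Residual-MRA module: two conv layers, MRA attention, residual add."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.mra = MRABlock(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection alleviates gradient vanishing in a deep stack.
        return torch.relu(x + self.mra(self.body(x)))
```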
Moreover, we also test the impact of two dataset splitting methods on NC classification results, an issue easily overlooked in previous works: participant-based splitting and eye-based splitting. During training, the original AS-OCT images are resized to 224 × 224 pixels as network inputs, the batch size is set to 16, the stochastic gradient descent (SGD) optimizer is used with default settings, and the number of training epochs is set to 100.
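The training configuration above (224 × 224 inputs, batch size 16, SGD, 100 epochs) could be set up roughly as follows; the dataset path, learning rate, and the torchvision ResNet-18 stand-in for MRA-Net are placeholders, since the abstract does not specify them.

```python
# Training-loop sketch matching the stated settings; placeholders are noted inline.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),          # resize AS-OCT images to 224 x 224 pixels
    transforms.ToTensor(),
])

# Assumed layout: one sub-folder per severity grade (ImageFolder convention).
train_set = datasets.ImageFolder("asoct/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(num_classes=len(train_set.classes))   # stand-in for MRA-Net
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)   # lr is a placeholder

for epoch in range(100):                    # 100 training epochs
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```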
Result
Our analysis demonstrates that the proposed MRA-Net achieves 87.78% accuracy on a clinical AS-OCT image dataset, a 1% improvement over the squeeze-and-excitation network (SENet). We also conduct comparative experiments using ResNet as the backbone network to verify that the summation operation works better than concatenation in the MRA block. The results of the two dataset splitting methods also show that ten classification methods, including MRA-Net and SENet, obtain better classification results on the eye-based dataset than on the participant-based dataset; e.g., the highest improvements in F1 and Kappa are 4.03% and 8%, respectively.
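The two splitting strategies compared here can be reproduced with standard tooling. The sketch below uses scikit-learn's GroupShuffleSplit together with the macro-F1 and Cohen's kappa metrics; the column names, split ratio, and toy labels are purely illustrative assumptions.

```python
# Sketch of eye-based vs. participant-based dataset splitting and the reported metrics.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit
from sklearn.metrics import f1_score, cohen_kappa_score

# Toy index of AS-OCT images: two eyes per participant (column names are assumptions).
records = pd.DataFrame({
    "image":          ["p1_od.png", "p1_os.png", "p2_od.png", "p2_os.png"],
    "eye":            ["p1_od", "p1_os", "p2_od", "p2_os"],
    "participant_id": ["p1", "p1", "p2", "p2"],
    "label":          [2, 2, 4, 3],   # NC severity grades (toy values)
})

splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)

# Eye-based split: each eye is its own group, so the two very similar eyes of one
# participant can land in different sets and inflate test performance.
eye_train, eye_test = next(splitter.split(records, groups=records["eye"]))

# Participant-based split (recommended): both eyes of a participant stay together.
part_train, part_test = next(splitter.split(records, groups=records["participant_id"]))

# Metrics used in the comparison, computed here on hypothetical predictions.
y_true, y_pred = [2, 4, 3, 2], [2, 4, 3, 4]
print("macro-F1:", f1_score(y_true, y_pred, average="macro"))
print("kappa:   ", cohen_kappa_score(y_true, y_pred))
```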
Conclusion
Our MRA-Net considers the differences in feature distribution across regions of a feature map and incorporates clinical priors into the network architecture design. MRA-Net achieves strong classification performance and outperforms advanced methods. The classification results of the two dataset splitting methods on the AS-OCT image dataset also indicate that, given the similar nuclear cataract severity of the two eyes of the same participant, the AS-OCT image dataset should be split at the participant level rather than the eye level, which ensures that all images of a participant fall into the same training or testing set. Overall, our MRA-Net has the potential to serve as a computer-aided diagnosis tool to assist clinicians in diagnosing cataracts.