OCTA Vessel Segmentation Fusing Latent Vector Alignment and Swin Transformer

Xu Cong1,2, Hao Huaying2, Wang Yang3, Ma Yuhui2, Yan Qifeng2, Chen Bang2, Ma Shaodong2, Wang Xiaogui1, Zhao Yitian2 (1. College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310000; 2. Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology & Engineering, Chinese Academy of Sciences, Ningbo 315201; 3. Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094)

Abstract
Objective Optical coherence tomography angiography (OCTA) is an emerging, non-invasive technique that is increasingly used for retinal vascular imaging. Compared with conventional color fundus photography, OCTA can reveal microvascular information around the macula and thus offers significant advantages in the field of retinal vascular imaging. In clinical practice, doctors observe the vascular structures of different retinal layers in OCTA images and judge the presence of related diseases by analyzing changes in these structures. Numerous studies have shown that any abnormal change in vascular structure usually indicates some ophthalmic disease. Automatic segmentation and extraction of the retinal vascular structure in OCTA images is therefore of great significance for the quantitative analysis of many ocular diseases and for clinical decision-making. However, OCTA images suffer from complex retinal vascular structures and low overall contrast, which pose great challenges to automatic segmentation. To this end, we propose a novel retinal vessel segmentation method that fuses latent vector alignment and the Swin Transformer, achieving accurate segmentation of vascular structures. Method With ResU-Net as the backbone network, a Swin Transformer encoder captures rich vascular feature information. In addition, a feature alignment loss function based on latent vectors is designed, which optimizes the network at the latent-space level and improves segmentation performance. Result Experimental results on three OCTA datasets show that our method achieves AUC (area under the curve) values of 94.15%, 94.87%, and 97.63%, and ACC (accuracy) values of 91.57%, 90.03%, and 91.06%, respectively, surpassing the compared methods with the best overall segmentation performance. Conclusion The proposed retinal vessel segmentation network achieves the best segmentation performance on all three OCTA datasets, outperforming the compared methods.
Vessel segmentation of OCTA images based on latent vector alignment and Swin Transformer

Xu Cong1,2, Hao Huaying2, Wang Yang3, Ma Yuhui2, Yan Qifeng2, Chen Bang2, Ma Shaodong2, Wang Xiaogui1, Zhao Yitian2(1.College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310000, China;2.Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology & Engineering, Chinese Academy of Sciences, Ningbo 315201, China;3.Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China)

Objective Optical coherence tomography angiography (OCTA) is an emerging, non-invasive technique that is increasingly used to image the retinal vasculature at capillary-level resolution. OCTA can demonstrate the microvascular information around the macula and offers significant advantages in retinal vascular imaging. Fundus fluorescein angiography can also visualize the retinal vascular system, including capillaries; however, it requires intravenous injection of a contrast agent, a process that is relatively time-consuming and may cause serious side effects. In clinical practice, doctors examine the vascular structures of different retinal layers in OCTA images and analyze changes in these structures to determine the presence of related diseases. In particular, any abnormality in the microvasculature of the macula often indicates diseases such as early-stage glaucomatous optic neuropathy, diabetic retinopathy, and age-related macular degeneration. The automatic segmentation and extraction of the retinal vascular structure in OCTA is therefore vital for the quantitative analysis and clinical decision-making of many ocular diseases. However, the OCTA imaging process usually produces images with a low signal-to-noise ratio, which poses a great challenge for the automatic segmentation of vascular structures. Moreover, variations in vessel appearance, motion and shadowing artifacts in different depth layers, and underlying pathological structures significantly increase the difficulty of accurately segmenting retinal vessels. This study therefore proposes a novel segmentation method for retinal vascular structures that fuses latent vector alignment and the Swin Transformer to achieve accurate segmentation.

Method In this study, ResU-Net is used as the base network (its encoder and decoder layers consist of residual blocks and pooling layers), and the Swin Transformer is introduced into ResU-Net to form a new encoder structure. The feature encoder consists of four stages, each comprising two layers: a Transformer layer built from several stacked Swin Transformer blocks, and a residual structure. The Swin Transformer encoder acquires rich feature information, and the feature maps output by each Swin Transformer layer are combined with the upsampled feature maps of the decoder via skip connections. A feature alignment loss function based on latent vectors is also designed. Unlike classical pixel-level loss functions, the feature alignment loss optimizes segmentation results in the feature dimension: by constraining the consistency of labels and images in the latent space, it strengthens the encoder's ability to extract the vascular structural features of OCTA images and optimizes the network at the latent-space level, improving segmentation performance.

Result Experimental results on three OCTA datasets (two public datasets and one private dataset) show that our method outperforms the other compared methods and achieves the best overall segmentation performance. Specifically, the area under the curve (AUC) values of our method reach 94.15%, 94.87%, and 97.63%, and the accuracy (ACC) values reach 91.57%, 90.03%, and 91.06%, respectively. Compared with the classical medical image segmentation network U-Net, the proposed method improves the AUC, Kappa, false discovery rate (FDR), and Dice scores by approximately 4.06%, 10.18%, 23.16%, and 7.87%, respectively, on the OCTA-O dataset. In addition, ablation experiments were conducted to verify the validity of each component of the proposed model; the results show that every component plays a positive role.

Conclusion An end-to-end vessel segmentation network is proposed in this study to address the complex retinal vascular structures and low overall image contrast of OCTA. ResU-Net is used as the backbone network to mitigate the interference of scattering noise and artifacts on segmentation through multi-fusion image input, and the Swin Transformer module is used as the encoding structure to obtain rich features. A novel latent vector alignment loss function is also designed, which optimizes the network at the latent-space level, reducing the gap between segmentation results and labels and improving segmentation performance. The experimental results demonstrate that the proposed method achieves the best segmentation performance on all three OCTA datasets, outperforming the other compared methods.
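The core operation that distinguishes the Swin Transformer encoder from a plain vision Transformer is window partitioning: self-attention is computed independently within small non-overlapping windows rather than over the whole feature map, keeping the cost linear in image size. The abstract does not give the paper's window size or implementation, so the following NumPy sketch only illustrates the partitioning step itself; the shapes and the `win` parameter are illustrative assumptions.

```python
import numpy as np

def window_partition(feat: np.ndarray, win: int) -> np.ndarray:
    """Split an (H, W, C) feature map into non-overlapping win x win
    windows, returning (num_windows, win*win, C). In a Swin-style
    encoder, self-attention would then be computed inside each window
    independently (toy illustration, not the paper's implementation)."""
    H, W, C = feat.shape
    assert H % win == 0 and W % win == 0, "feature map must tile evenly"
    # Group rows and columns into window-sized blocks, then flatten
    # each block into a sequence of win*win tokens.
    x = feat.reshape(H // win, win, W // win, win, C)
    x = x.transpose(0, 2, 1, 3, 4)          # (H/win, W/win, win, win, C)
    return x.reshape(-1, win * win, C)

# A 4x4 single-channel map split into four 2x2 windows.
feat = np.arange(16, dtype=float).reshape(4, 4, 1)
windows = window_partition(feat, 2)
print(windows.shape)                         # (4, 4, 1)
print(windows[0].ravel().tolist())           # top-left window: [0.0, 1.0, 4.0, 5.0]
```

Shifting the window grid between successive blocks (Swin's shifted-window scheme) then lets information flow across window boundaries; that shift is omitted here for brevity.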
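The latent vector alignment loss described above can be sketched as follows. The abstract states only that the loss constrains the consistency of predictions and labels in the latent space; this minimal NumPy example assumes a squared-L2 distance between latent codes produced by a shared encoder. The toy linear encoder, its dimensions, and the function names are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def encode(x: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Toy stand-in for the shared feature encoder: flatten the map and
    project it into a latent vector. The actual model uses the Swin
    Transformer encoder; this linear projection is only illustrative."""
    return W @ x.ravel()

def alignment_loss(pred: np.ndarray, label: np.ndarray, W: np.ndarray) -> float:
    """Latent-vector alignment loss (assumed form): mean squared distance
    between the latent codes of the predicted vessel map and the
    ground-truth label, pulling the two together in latent space."""
    z_pred = encode(pred, W)
    z_label = encode(label, W)
    return float(np.mean((z_pred - z_label) ** 2))

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))            # hypothetical 64-pixel map -> 16-d latent
label = (rng.random((8, 8)) > 0.5).astype(float)
print(alignment_loss(label, label, W))       # perfectly aligned inputs: 0.0
print(alignment_loss(rng.random((8, 8)), label, W) > 0.0)  # mismatch penalized
```

In training, such a term would be added to the usual pixel-level segmentation loss with a weighting coefficient, so the network is optimized at both the pixel level and the latent-space level.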