Objective Accurate segmentation of the pancreas is an important prerequisite for the identification and analysis of pancreatic cancer. Most existing mainstream deep-learning pancreas segmentation networks adopt an encoder-decoder structure that first reduces and then restores the resolution of feature maps, severely losing pancreas location and detail information and degrading segmentation quality. To address this problem, a pancreas segmentation method based on a 3D path aggregation high-resolution network is proposed. Method First, to capture more 3D contextual feature information, the 2D operations in the high-resolution network are extended to 3D operations. Second, a full-resolution feature path aggregation module is proposed, which uses consecutive nonlinear transformations to narrow the semantic gap between the full-resolution input image and the output features of the segmentation head network, while reducing the impact on the segmentation results of the location and detail information lost through the down-sampling of the stem network. Finally, a multi-scale feature path aggregation module is proposed, which uses a progressive adaptive feature compression and fusion strategy to avoid the information loss caused by excessive compression of low-resolution feature channels. Result On a public pancreas dataset, the proposed method improves the Dice similarity coefficient (DSC), Jaccard index (JI), precision, and recall by 1.41%, 2.09%, 2.35%, and 0.49%, respectively, over the 3D high-resolution network (3DHRNet), and achieves higher segmentation accuracy than representative encoder-decoder pancreas segmentation methods. Conclusion The proposed 3D path aggregation high-resolution network (3DPAHRNet) better preserves feature location and detail information, and significantly improves the segmentation of the pancreas, an organ that occupies only a small proportion of abdominal CT (computed tomography) images. The open-source code is available at https://github.com/qiuchengjian/PAHRNet3D.
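The full-resolution feature path aggregation module described above can be sketched in PyTorch as a stack of consecutive nonlinear transformations applied to the full-resolution input, fused with the segmentation-head features. This is a minimal illustration under stated assumptions, not the authors' implementation (see the linked repository for that): the channel sizes, the instance-normalization choice, and the concatenation-based fusion layer are all hypothetical.

```python
import torch
import torch.nn as nn

class FullResolutionPathAggregation(nn.Module):
    """Hypothetical sketch of a full-resolution feature path aggregation
    module: consecutive conv-norm-ReLU transformations on the full-resolution
    input, whose output is fused with the segmentation-head features to
    restore location and detail information lost by the stem network."""

    def __init__(self, in_ch=1, mid_ch=16, head_ch=32, num_blocks=5):
        super().__init__()
        blocks, ch = [], in_ch
        for _ in range(num_blocks):  # consecutive nonlinear transformations
            blocks += [nn.Conv3d(ch, mid_ch, kernel_size=3, padding=1),
                       nn.InstanceNorm3d(mid_ch),
                       nn.ReLU(inplace=True)]
            ch = mid_ch
        self.transform = nn.Sequential(*blocks)
        # fuse the full-resolution path with the segmentation-head features
        self.fuse = nn.Conv3d(mid_ch + head_ch, head_ch, kernel_size=1)

    def forward(self, full_res_input, head_features):
        # head_features are assumed already upsampled to full resolution
        path = self.transform(full_res_input)
        return self.fuse(torch.cat([path, head_features], dim=1))
```

In this sketch the aggregated output keeps the spatial size of the full-resolution input, so no location information is discarded before fusion.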
Pancreas segmentation based on 3D path aggregation high-resolution network
Objective Accurate pancreas segmentation is an important prerequisite for the detection, identification, and analysis of pancreatic cancer. However, because the pancreas occupies only a small proportion of the input CT volume and varies greatly in position and shape, accurate pancreas segmentation remains a challenging task. Most existing mainstream deep-learning pancreas segmentation networks are based on the encoder-decoder structure: the encoder first reduces the resolution of the input image through repeated down-sampling to capture strong semantics over a large receptive field and identify the complete pancreas, and the decoder then gradually restores the lowest-resolution encoder features to produce the predicted segmentation. However, the repeated down-sampling in the encoder loses feature location and detail information. Method To alleviate this problem, this paper proposes a 3D path aggregation high-resolution network (3DPAHRNet) for pancreas segmentation. First, to capture additional 3D feature context, the 2D convolution operations in the high-resolution network are extended to 3D convolution operations. Second, a full-resolution feature path aggregation module is proposed that applies five consecutive nonlinear transformations to reduce the semantic difference between the full-resolution input and the output of the segmentation head network, while lessening the impact on the segmentation results of the location and detail information lost through the continuous down-sampling of the stem network. Finally, a multi-scale feature path aggregation module is proposed that uses a progressive feature-channel compression and fusion strategy, allowing the multi-scale features output by the high-resolution network to be adjusted adaptively and avoiding the information loss caused by excessive compression of low-resolution feature channels.
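The progressive feature-channel compression idea can be sketched as follows: instead of projecting a low-resolution feature map to the target channel count in a single step, the channels are reduced gradually. This is a hedged PyTorch illustration, not the authors' implementation; the halving schedule, 1×1×1 projections, and channel counts are assumptions for demonstration.

```python
import torch
import torch.nn as nn

class ProgressiveChannelCompression(nn.Module):
    """Hypothetical sketch of progressive channel compression: reduce the
    channel count stepwise (halving at each step) with 1x1x1 convolutions,
    rather than in one drastic projection, so that information content is
    compressed gradually instead of all at once."""

    def __init__(self, in_ch=256, out_ch=32):
        super().__init__()
        layers, ch = [], in_ch
        while ch // 2 > out_ch:  # halve channels until near the target
            layers += [nn.Conv3d(ch, ch // 2, kernel_size=1),
                       nn.ReLU(inplace=True)]
            ch //= 2
        layers.append(nn.Conv3d(ch, out_ch, kernel_size=1))  # final step
        self.compress = nn.Sequential(*layers)

    def forward(self, x):
        return self.compress(x)
```

For example, with 256 input channels and a 32-channel target, this sketch compresses 256 → 128 → 64 → 32 rather than 256 → 32 directly.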
Result To verify the effectiveness of the proposed method, extensive experiments are conducted on a public pancreas dataset. First, the segmentation results are compared with those of mainstream pancreas segmentation networks, including 3D U-Net, AttentionUNet, VNet, and 3DHRNet. Compared with the state-of-the-art results, the proposed method improves the Dice similarity coefficient, Jaccard index, precision, and recall by 1.41%, 2.09%, 2.35%, and 0.49%, respectively. Second, the effectiveness of the proposed modules is verified through three ablation studies. Experimental results show that when the number of down-sampling operations in the stem subnetwork of 3DHRNet is reduced, or when either the full-resolution or the multi-scale feature path aggregation module is added, the average segmentation accuracy improves significantly. Finally, the proposed method is compared with representative pancreas segmentation methods, and the comparison shows that it improves the state-of-the-art segmentation accuracy by 1.1%. Conclusion This paper proposes 3DPAHRNet for pancreas segmentation. Unlike the original high-resolution network designed for natural images, the proposed method not only maintains high-resolution features throughout the network but also enables the network to retain additional location and detail information from the full-resolution input, thereby significantly improving on existing pancreas segmentation networks. The open-source code is available at https://github.com/qiuchengjian/PAHRNet3D.