Fusing dilated convolution and attention for segmentation of gastric cancer tissue sections

Chen Yingsi, Li Han, Zhou Xueting, Wan Cheng (College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China)

Abstract
Objective Pathological tissue section examination is the gold standard for diagnosing gastric cancer, and accurately locating lesion areas in a section helps confirm the diagnosis in time and start follow-up treatment. However, because pathological section images are complex and the morphological differences between lesion cells and normal cells are small, conventional semantic segmentation models do not achieve satisfactory results. We therefore propose a semantic segmentation method for pathological sections, ADEU-Net (attention-dilated-efficient U-Net++), which improves the accuracy of gastric cancer region segmentation and supports end-to-end segmentation. Method ADEU-Net uses a transfer-learned EfficientNet as the encoder to strengthen image feature extraction. The decoder adopts a simplified version of the U-Net++ short-connection scheme, fusing shallow and deep features while reducing the number of network parameters, and its convolution blocks are redesigned to improve gradient propagation. A center module applies dilated convolutions to the encoder output for multi-scale feature extraction, making the model more robust to sections of different sizes. The skip connections between the encoder and decoder use attention modules to suppress feature responses from the background. Result Compared with classical methods on the dataset of the 2020 "Hualu Cup" Jiangsu Big Data Development and Application Competition (SEED), several classical models were shown to fit this segmentation task poorly, while experiments showed that modifying the feature extraction design improves the results substantially; the proposed method improves segmentation accuracy by 18.96% over the original U-Net. Ablation experiments on the SEED dataset and the dataset of the 2017 China Big Data and Artificial Intelligence Innovation and Entrepreneurship Competition (brain of things, BOT) verify that every module of the proposed method contributes to the segmentation of pathological sections. On SEED, ADEU-Net improves the Dice coefficient, accuracy, sensitivity, and precision over the baseline model by 5.17%, 2.7%, 3.69%, and 4.08%, respectively; on BOT, the four metrics improve by 0.47%, 0.06%, 4.30%, and 6.08%. Conclusion The proposed ADEU-Net improves the accuracy of lesion segmentation in gastric cancer pathological sections and generalizes well.
Keywords
Fusing dilated convolution and attention for segmentation of gastric cancer tissue sections

Chen Yingsi, Li Han, Zhou Xueting, Wan Cheng (College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China)

Abstract
Objective Pathological section examination is the gold standard for the diagnosis of gastric cancer, and precise detection of lesion areas in a section benefits timely diagnosis and follow-up treatment. In practice, pathologists may miss subtle changes in cancerous cells, so automated segmentation of gastric cancer cells can assist diagnosis. Deep convolutional neural networks (DCNNs) have achieved good classification performance on gastric pathological section images, but segmentation of pathological sections still faces two challenges. First, gastric cancer cells and normal cells are close in color and morphology, which makes deep features hard to extract. Second, pathological section images are captured at different magnifications, so targets of different sizes are hard to segment. A semantic segmentation network called attention-dilated-efficient U-Net++ (ADEU-Net) is proposed to improve the precision of gastric cancer cell segmentation over the original U-Net. Method The proposed framework is an encoder-decoder network that can be trained end to end. Because the deep features extracted from pathological section images largely determine segmentation accuracy, the feature extraction part of EfficientNet, which performs well in classification, is adopted as the encoder. The weights of EfficientNet are pre-trained on ImageNet, and its structure is divided into five stages to provide the skip connections. The decoder is designed after U-Net++, in which the encoder and decoder sub-networks are connected through nested, dense skip pathways. To enable model training on an 8 GB GPU, only part of the skip connections in U-Net++ are retained. The convolution blocks in the decoder are also redesigned to ease gradient transfer.
A center module called DBlock is added to enhance feature extraction for pathological sections of different sizes. In DBlock, three dilated convolution layers are cascaded to capture features at multiple receptive fields. Following the hybrid dilated convolution (HDC) design, the dilation rates of the stacked layers are set to 1, 2, and 5, giving receptive fields of 3, 7, and 17, respectively. The resulting feature maps are concatenated along the channel dimension and fused by a 1×1 convolution layer so that multi-scale features are exploited simultaneously. In addition, an attention mechanism replaces the plain skip connection between the encoder and the decoder to suppress the feature response of background regions. The output of the encoder and the output of the upper decoder layer each pass through a 1×1 convolution layer and are summed, and the resulting weights are applied to the original feature map to form the attention gate. Deep supervision is also adopted to speed up convergence during training. Result Experiments are conducted on two gastric cancer cell section segmentation competition datasets, SEED and BOT, to verify the effectiveness of the method. The evaluation metrics are the Dice coefficient, sensitivity, pixel-wise accuracy, and precision, and the segmentation results are also compared visually. First, the proposed method is compared with several classical models on the SEED dataset; its accuracy is 18.96% higher than that of the original U-Net, which shows that the design of the feature extractor is crucial to segmentation accuracy and that transfer learning of the encoder improves the results considerably.
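The receptive fields quoted for the DBlock can be checked with the standard recurrence for stacked stride-1 convolutions, RF_n = RF_{n-1} + (k − 1)·d_n with RF_0 = 1. The following few lines are an illustrative sketch of that arithmetic, not the paper's code:

```python
# Receptive field of stacked 3x3 dilated convolutions with stride 1:
# RF_n = RF_{n-1} + (kernel - 1) * dilation_n, starting from RF_0 = 1.
def stacked_receptive_fields(dilations, kernel=3):
    rf, fields = 1, []
    for d in dilations:
        rf += (kernel - 1) * d
        fields.append(rf)
    return fields

# HDC-style dilation rates 1, 2, 5 reproduce the receptive fields 3, 7, 17
# stated for the three cascaded layers of DBlock.
print(stacked_receptive_fields([1, 2, 5]))  # [3, 7, 17]
```

Choosing co-prime rates such as 1, 2, 5 (rather than, say, 2, 4, 8) is what HDC prescribes to avoid gridding artifacts, since repeated even rates sample the input on a sparse lattice.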
Ablation experiments are then performed on each added module to confirm its contribution. Compared with the baseline model, the Dice coefficient, accuracy, sensitivity, and precision increase by 5.17%, 2.7%, 3.69%, and 4.08% on SEED, and by 0.47%, 0.06%, 4.30%, and 6.08% on BOT, respectively, demonstrating the effectiveness of each part of the proposed algorithm. The visual segmentation results are also closer to the ground-truth labels. Conclusion A semantic segmentation model called ADEU-Net is presented for the segmentation of gastric cancer pathological sections. Its improvements come from adopting EfficientNet for feature extraction, assembling multi-scale features with cascaded dilated convolution layers, and replacing the skip connection between the encoder and decoder with an attention module.
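The additive attention gate described in the Method section (two 1×1 convolutions summed, then squashed into per-pixel weights that rescale the skip feature) can be sketched in NumPy as follows. All shapes, channel sizes, and weight initializations here are hypothetical illustrations, not the paper's configuration:

```python
import numpy as np

def conv1x1(x, w):
    # 1x1 convolution on a (C, H, W) map: a per-pixel linear map over channels.
    return np.einsum('oc,chw->ohw', w, x)

def attention_gate(x, g, w_x, w_g, w_psi):
    # x: encoder skip feature (C_x, H, W); g: upper-decoder feature (C_g, H, W).
    # Project both to a common channel dim, sum, apply ReLU, then a 1x1
    # convolution and sigmoid to get per-pixel weights in (0, 1) that
    # suppress background responses in the skip feature.
    z = np.maximum(conv1x1(x, w_x) + conv1x1(g, w_g), 0.0)   # ReLU
    alpha = 1.0 / (1.0 + np.exp(-conv1x1(z, w_psi)))          # sigmoid, (1, H, W)
    return x * alpha                                          # reweighted skip

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))       # hypothetical encoder feature
g = rng.standard_normal((4, 16, 16))       # hypothetical decoder feature (upsampled)
w_x = rng.standard_normal((6, 8)) * 0.1    # 1x1 conv weights, intermediate dim 6
w_g = rng.standard_normal((6, 4)) * 0.1
w_psi = rng.standard_normal((1, 6)) * 0.1
out = attention_gate(x, g, w_x, w_g, w_psi)
print(out.shape)  # (8, 16, 16): same shape as x, with background attenuated
```

Because the sigmoid output lies strictly in (0, 1), the gated feature is never amplified, only attenuated, which is what suppresses the feature correspondence of background regions before the skip feature is concatenated into the decoder.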
Keywords
