Feature selection and residual fusion segmentation network for liver tumor
2022, Vol. 27, No. 3, pp. 838-849
Received: 2021-04-21; Revised: 2021-07-27; Accepted: 2021-08-03; Published in print: 2022-03-16
DOI: 10.11834/jig.210250
Objective
Efficient automatic segmentation of liver tumors in computed tomography (CT) images is an urgent need in clinical practice. However, liver tumor boundaries are unclear, the tumors are relatively small, and their locations are irregular, so the segmentation model must mine the inter-class differences precisely and accurately. To this end, this paper proposes a 2D liver tumor segmentation model based on feature selection and residual fusion, which improves the performance of 2D models on the liver tumor segmentation task.
Method
The model uses attention mechanisms to optimize the U-Net bottleneck features and skip connections. To fit the characteristics of the liver tumor segmentation task, the traditional attention module is redesigned, and bottleneck feature selection modules built on a global feature squeeze (GFS) operation are proposed, namely the global feature selection module (FS) and the neighbor feature selection module (NFS). The skip connections are first recalibrated by a spatial attention module (SAM) and then passed through a spatial feature residual fusion (SFRF) module to resolve the semantic mismatch between the front and rear spatial features, so that the features are expressed efficiently while the model complexity stays low.
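The abstract only describes where these modules sit; as an illustration, the following PyTorch-style sketch (not the authors' code) wires placeholder FS/NFS, SAM, and SFRF modules into a plain U-Net. The module internals are intentionally left as identity mappings, and in the full model the SFRF placeholder would also receive a deeper feature map; everything beyond the wiring described above is an assumption.

```python
# Minimal sketch (not the authors' code) of where the proposed modules plug
# into a U-Net: FS/NFS at the bottleneck, SAM + SFRF on each skip connection.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class FSFUNetSketch(nn.Module):
    def __init__(self, c_in=1, c_out=1, widths=(32, 64, 128, 256)):
        super().__init__()
        self.encoders = nn.ModuleList()
        c_prev = c_in
        for w in widths:
            self.encoders.append(conv_block(c_prev, w))
            c_prev = w
        self.bottleneck = conv_block(widths[-1], widths[-1] * 2)
        self.feature_select = nn.Identity()   # placeholder for FS or NFS at the bottleneck
        self.sam = nn.ModuleList(nn.Identity() for _ in widths)   # placeholder SAM per skip
        self.sfrf = nn.ModuleList(nn.Identity() for _ in widths)  # placeholder SFRF per skip
        self.decoders = nn.ModuleList()
        c_prev = widths[-1] * 2
        for w in reversed(widths):
            self.decoders.append(conv_block(c_prev + w, w))
            c_prev = w
        self.head = nn.Conv2d(widths[0], c_out, 1)

    def forward(self, x):
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)
            x = F.max_pool2d(x, 2)
        x = self.feature_select(self.bottleneck(x))   # FS/NFS refines the bottleneck features
        for dec, sam, sfrf, skip in zip(self.decoders, self.sam, self.sfrf, reversed(skips)):
            skip = sfrf(sam(skip))   # recalibrate the skip, then residual-fuse it (SFRF)
            x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear", align_corners=False)
            x = dec(torch.cat([x, skip], dim=1))
        return self.head(x)

# Example: out = FSFUNetSketch()(torch.randn(1, 1, 256, 256))  # -> (1, 1, 256, 256)
```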
Result
Component ablation tests and comparisons with current methods were carried out on the public LiTS (liver tumor segmentation) dataset. The mean Dice scores on the liver and liver tumor segmentation tasks are 96.2% and 68.4%, respectively, which is comparable to some 2.5D and 3D models and 0.8% higher in mean Dice score than the current best 2D liver tumor segmentation model.
Conclusion
The proposed FSF-U-Net (feature selection and residual fusion U-Net) model makes 2D liver tumor segmentation more accurate by combining the improved attention mechanisms with an optimized U-Net structure.
Objective
Liver cancer is currently one of the most common cancers and has one of the highest mortality rates in the world. Computed tomography (CT) is a routine clinical method for tumor diagnosis, and it can help clinicians design targeted treatment plans based on the measured shape and location of the tumor. Manual segmentation of CT images, however, is inefficient and strongly dependent on the experience of individual doctors, so efficient automatic segmentation methods are in demand in clinical practice, and liver treatment can benefit from accurate and fast automatic segmentation. Because of the low soft-tissue contrast of CT images, the shapes and positions of liver tumors are highly variable, the boundaries of tumor regions are difficult to identify, and most tumor areas are relatively small, which makes automatic liver tumor segmentation a challenging task. The segmentation model therefore needs to discover the differences between classes accurately. Deep-learning-based segmentation models can be divided into three categories: 2D, 2.5D, and 3D. The traditional channel attention module uses global average pooling (GAP) to squeeze the feature map; this operation simply averages each feature map, so the spatial information on the feature map is lost. The model then focuses on the correlation among channels while ignoring the spatial features of each channel, whereas the segmentation task depends on spatial information. This paper presents a 2D liver tumor segmentation model with feature selection and residual fusion to improve the performance of low-complexity models.
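For reference, the GAP-based channel attention criticized above (the squeeze-and-excitation block of Hu et al., 2018) can be sketched as follows. This is an illustrative sketch, not code from the paper; it only shows how the GAP squeeze collapses every H×W feature map to a single value, which is the spatial information loss discussed here.

```python
# Sketch of conventional SE-style channel attention with a GAP squeeze.
# The GAP step reduces each (C, H, W) feature map to (C, 1, 1), so all spatial
# detail inside a channel is discarded before the channel weights are computed.
import torch
import torch.nn as nn

class SEChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                      # x: (N, C, H, W)
        squeezed = x.mean(dim=(2, 3))          # GAP: (N, C), spatial dims averaged away
        weights = self.fc(squeezed)            # per-channel weights in [0, 1]
        return x * weights[:, :, None, None]   # recalibrate the channels

# Example: y = SEChannelAttention(64)(torch.randn(1, 64, 32, 32))
```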
Method
The model optimizes the U-Net bottleneck features with an attention mechanism and redesigns the skip connections. To fit the characteristics of the liver tumor segmentation task, we modify the traditional attention module and substitute a global feature squeeze (GFS) operation for its global average pooling (GAP); the bottleneck feature selection modules are built on this attention module. To cover the different needs of liver and liver tumor segmentation, two variants are derived: the feature selection (FS) module and the neighboring feature selection (NFS) module. They preserve spatial information with very few additional parameters, which greatly improves segmentation accuracy, and both recalibrate the channels adaptively. The difference lies in their scope. The FS module attends to the state of all channels: each channel expresses one type of semantic feature, and compressing all channels together captures the correlation of every channel, which suits tasks such as liver segmentation that need to merge all of the semantic information in the image. The NFS module instead operates on adjacent groups of channels and models the connections between neighboring semantic features, which suits fine-grained tasks such as liver tumor segmentation.
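The GFS operation is not specified in this abstract, so the sketch below is only a hedged illustration of the two variants: FS is shown as a full-channel interaction in the spirit of squeeze-and-excitation, NFS as an ECA-style 1-D convolution over neighboring channels (Wang et al., 2020), and a plain GAP stands in for the paper's GFS.

```python
# Hedged sketch of the two bottleneck feature-selection variants (not the
# authors' code). gfs_squeeze is a placeholder: the real GFS replaces GAP and
# preserves spatial cues with few parameters.
import torch
import torch.nn as nn

def gfs_squeeze(x):                 # placeholder for GFS; here simply GAP
    return x.mean(dim=(2, 3))       # (N, C, H, W) -> (N, C)

class FSModule(nn.Module):
    """FS: models the correlation of ALL channels jointly (liver segmentation)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(gfs_squeeze(x))              # one weight per channel from all channels
        return x * w[:, :, None, None]

class NFSModule(nn.Module):
    """NFS: relates each channel only to its NEIGHBORING channels
    (ECA-style 1-D convolution over the channel axis; liver tumor segmentation)."""
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        s = gfs_squeeze(x).unsqueeze(1)            # (N, 1, C)
        w = self.sigmoid(self.conv(s)).squeeze(1)  # adjacent-channel interaction
        return x * w[:, :, None, None]

# Example: y = NFSModule()(torch.randn(1, 512, 16, 16))
```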
The spatial feature residual fusion (SFRF) module is placed in the U-Net skip connections to resolve their semantic gap and to make full use of spatial features. It fills the semantic gap of the early skip connections by introducing mid-to-late high-level features; to avoid disturbing the early feature expression excessively, a residual connection is adopted. The module compresses the deep features with a 1×1 convolution and then upsamples the feature map with bilinear interpolation so that the channels and resolution match. In addition, the skip connections are first recalibrated by the spatial attention module (SAM). Together, SAM and SFRF resolve the semantic mismatch between the front and rear spatial features, so that the features are expressed efficiently.
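A hedged sketch of this skip-connection path is given below: a CBAM-style spatial attention block (Woo et al., 2018) stands in for SAM, and SFRF is shown as a 1×1 convolution plus bilinear upsampling of a deeper feature map, added residually to the recalibrated skip feature. The exact layer choices are assumptions, not the authors' implementation.

```python
# Hedged sketch of SAM recalibration followed by SFRF residual fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Channel-wise max and mean maps describe "where" the informative features are.
        pooled = torch.cat([x.max(dim=1, keepdim=True).values,
                            x.mean(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

class SFRF(nn.Module):
    def __init__(self, skip_channels, deep_channels):
        super().__init__()
        self.sam = SpatialAttention()
        self.reduce = nn.Conv2d(deep_channels, skip_channels, kernel_size=1)

    def forward(self, skip, deep):
        skip = self.sam(skip)                     # spatial recalibration of the skip feature
        deep = self.reduce(deep)                  # 1x1 convolution: match channel count
        deep = F.interpolate(deep, size=skip.shape[-2:],
                             mode="bilinear", align_corners=False)  # match resolution
        return skip + deep                        # residual fusion fills the semantic gap

# Example: fuse a 64-channel early skip with a 256-channel deeper feature map.
# fused = SFRF(64, 256)(torch.randn(1, 64, 128, 128), torch.randn(1, 256, 32, 32))
```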
Result
We performed component ablation tests on the public LiTS dataset and compared the model with current methods. With the feature selection (FS/NFS) operation added at the U-Net bottleneck, the model improves significantly over the baseline: the mean Dice score for liver segmentation is above 95%, about 37% better than the baseline in terms of erroneous predictions, and the tumor segmentation scores are all above 65%. With the spatial attention module (SAM) and the spatial feature residual fusion (SFRF) module further added to the skip connections of the baseline, the FS module and the NFS module achieve the highest mean Dice scores in the liver and liver tumor segmentation tasks, respectively. On the liver and liver tumor segmentation tasks, mean Dice scores of 96.2% and 68.4% are obtained, respectively. These results are comparable to those of some 2.5D and 3D models and are 0.8% higher in mean Dice score than the current best 2D liver tumor segmentation model.
Conclusion
We present a 2D liver tumor segmentation model based on feature selection and residual fusion. The bottleneck feature selection module recalibrates the channels, effectively suppresses invalid features, and improves the accuracy of the predictions. Optimizing the skip connections fills the semantic gap of U-Net and makes better use of spatial features, which further improves the segmentation results. Experiments show that the proposed model performs well on the LiTS dataset, especially among 2D segmentation methods.
References
Bilic P, Christ P F, Vorontsov E, Chlebus G, Chen H, Dou Q, Fu C W, Han X, Heng P A, Hesser J, Kadoury S, Konopczynski T, Le M, Li C M, Li X M, Lipková J, Lowengrub J, Meine H, Moltz J H, Pal C, Piraud M, Qi X J, Qi J, Rempfler M, Roth K, Schenk A, Sekuboyina A, Vorontsov E, Zhou P, Hülsemeyer C, Beetz M, Ettlinger F, Gruen F, Kaissis G, Lohöfer F, Braren R, Holch J, Hofmann F, Sommer W, Heinemann V, Jacobs C, Mamani G E H, van Ginneken B, Chartrand G, Tang A, Drozdzal M, Ben-Cohen A, Klang E, Amitai M M, Konen E, Greenspan H, Moreau J, Hostettler A, Soler L, Vivanti R, Szeskin A, Lev-Cohain N, Sosna J, Joskowicz L and Menze B H. 2019. The liver tumor segmentation benchmark (LiTS) [EB/OL]. [2021-04-21]. https://arxiv.org/pdf/1901.04056.pdf
Chattopadhyay S and Basak H. 2020. Multi-scale attention U-Net (MsAU-Net): a modified U-Net architecture for scene segmentation [EB/OL]. [2021-04-21]. https://arxiv.org/pdf/2009.06911.pdf
Chen X Y, Zhang R and Yan P K. 2019. Feature fusion encoder decoder network for automatic liver lesion segmentation//The 16th IEEE International Symposium on Biomedical Imaging. Venice, Italy: IEEE: 430-433 [DOI: 10.1109/ISBI.2019.8759555]
Chlebus G, Schenk A, Moltz J H, van Ginneken B, Hahn H K and Meine H. 2018. Automatic liver tumor segmentation in CT with fully convolutional neural networks and object-based postprocessing. Scientific Reports, 8(1): #15497 [DOI: 10.1038/s41598-018-33860-7]
Dey R and Hong Y. 2020. Hybrid cascaded neural network for liver lesion segmentation//Proceedings of the 17th IEEE International Symposium on Biomedical Imaging (ISBI). Iowa City, USA: IEEE: 1173-1177 [DOI: 10.1109/ISBI45749.2020.9098656]
Drozdzal M, Vorontsov E, Chartrand G, Kadoury S and Pal C. 2016. The importance of skip connections in biomedical image segmentation//Deep Learning and Data Labeling for Medical Applications. Cham: Springer: 179-187 [DOI: 10.1007/978-3-319-46976-8_19]
Han X. 2017. Automatic liver lesion segmentation using a deep convolutional neural network method [EB/OL]. [2021-04-21]. https://arxiv.org/pdf/1704.07239.pdf
Hu J, Shen L and Sun G. 2018. Squeeze-and-excitation networks//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 7132-7141 [DOI: 10.1109/CVPR.2018.00745]
Ibtehaz N and Rahman M S. 2020. MultiResUNet: rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Networks, 121: 74-87 [DOI: 10.1016/j.neunet.2019.08.025]
Li X M, Chen H, Qi X J, Dou Q, Fu C W and Heng P A. 2018. H-DenseUNet: hybrid densely connected UNet for liver and tumor segmentation from CT volumes. IEEE Transactions on Medical Imaging, 37(12): 2663-2674 [DOI: 10.1109/TMI.2018.2845918]
Liao M, Liu Y Z, Ouyang J L, Yu J Y, Zhao Y Q and Zhang B Z. 2019. Automatic segmentation of liver tumor in CT volumes using nonlinear enhancement and graph cuts. Journal of Computer-Aided Design and Computer Graphics, 31(6): 1030-1038 [DOI: 10.3724/SP.J.1089.2019.17258]
Liu Y P, Liu G P, Wang R F, Jin R, Sun D C, Qiu H, Dong C, Li J and Hong G B. 2020. Accurate segmentation method of liver tumor CT based on the combination of deep learning and radiomics. Journal of Image and Graphics, 25(10): 2128-2141 [DOI: 10.11834/jig.200198]
Oktay O, Schlemper J, Le Folgoc L L, Lee M, Heinrich M, Misawa K, Mori K, McDonagh S, Hammerla N Y, Kainz B, Glocker B and Rueckert D. 2018. Attention U-Net: learning where to look for the pancreas [EB/OL]. [2021-04-21]. https://arxiv.org/pdf/1804.03999.pdf
Ronneberger O, Fischer P and Brox T. 2015. U-Net: convolutional networks for biomedical image segmentation//Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich, Germany: Springer: 234-241 [DOI: 10.1007/978-3-319-24574-4_28]
Roy A G, Navab N and Wachinger C. 2018. Concurrent spatial and channel "squeeze and excitation" in fully convolutional networks//Proceedings of the 21st International Conference on Medical Image Computing and Computer Assisted Intervention. Granada, Spain: Springer: 421-429 [DOI: 10.1007/978-3-030-00928-1_48]
Simonyan K and Zisserman A. 2014. Very deep convolutional networks for large-scale image recognition [EB/OL]. [2021-04-21]. https://arxiv.org/pdf/1409.1556.pdf
Sinha A and Dolz J. 2021. Multi-scale self-guided attention for medical image segmentation. IEEE Journal of Biomedical and Health Informatics, 25(1): 121-130 [DOI: 10.1109/JBHI.2020.2986926]
Vorontsov E, Tang A, Pal C and Kadoury S. 2018. Liver lesion segmentation informed by joint liver segmentation//The 15th IEEE International Symposium on Biomedical Imaging (ISBI 2018). Washington, USA: IEEE: 1332-1335 [DOI: 10.1109/ISBI.2018.8363817]
Wang Q L, Wu B G, Zhu P F, Li P H, Zuo W M and Hu Q H. 2020. ECA-Net: efficient channel attention for deep convolutional neural networks//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 11531-11539 [DOI: 10.1109/CVPR42600.2020.01155]
Woo S, Park J, Lee J Y and Kweon I S. 2018. CBAM: convolutional block attention module//Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer: 3-19 [DOI: 10.1007/978-3-030-01234-2_1]
Yuan Y D. 2017. Hierarchical convolutional-deconvolutional neural networks for automatic liver and tumor segmentation [EB/OL]. [2021-04-21]. https://arxiv.org/pdf/1710.04540.pdf
Zhang J P, Xie Y T, Zhang P P, Chen H, Xia Y and Shen C H. 2019. Light-weight hybrid convolutional network for liver tumor segmentation//Proceedings of the 28th International Joint Conference on Artificial Intelligence. Macao, China: IJCAI: 4271-4277 [DOI: 10.24963/ijcai.2019/593]