LRUNet: a lightweight rapid semantic segmentation network for brain tumors
2021, Vol. 26, No. 9: 2233-2242
Received: 2020-08-06; Revised: 2020-10-22; Accepted: 2020-10-29; Published in print: 2021-09-16
DOI: 10.11834/jig.200436

Objective
Current deep-learning-based brain tumor segmentation algorithms suffer from large parameter counts, high computational complexity, and slow inference. To address these problems, an ultra-lightweight rapid semantic segmentation network, LRUNet (lightweight rapid UNet), is proposed. It improves segmentation accuracy while greatly reducing the number of parameters and the computational cost of the network, achieving fast segmentation.
Method
LRUNet is based on UNet. First, the number of channels in 3D-UNet is reduced to 1/4 of the original to cut its excessive parameter count. Second, every traditional convolution in the network except the final layer is replaced with a depthwise separable convolution, which sacrifices very little accuracy while greatly reducing the number of parameters, making the network lightweight. Third, a spatial and channel squeeze & excitation (scSE) block is used; it amplifies the weights of feature-map parameters that benefit the model and suppresses the weights of those that do not, improving segmentation accuracy.
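As an illustration of the factorization described above (a sketch in PyTorch, not the paper's exact implementation; the channel sizes are arbitrary examples), a 3D depthwise separable convolution splits a standard convolution into a per-channel depthwise convolution followed by a 1×1×1 pointwise convolution:

```python
import torch
import torch.nn as nn


class DepthwiseSeparableConv3d(nn.Module):
    """Factorizes a standard 3D convolution into a depthwise convolution
    (one kxkxk filter per input channel) and a 1x1x1 pointwise convolution
    that mixes channels."""

    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # depthwise: groups=in_ch gives one spatial filter per channel
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch, bias=False)
        # pointwise: 1x1x1 convolution combines channels into out_ch outputs
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


def count_params(m):
    return sum(p.numel() for p in m.parameters())


# standard 32->64 3x3x3 conv: 64*32*27 = 55,296 weights
standard = nn.Conv3d(32, 64, 3, padding=1, bias=False)
# separable version: 32*27 + 64*32 = 2,912 weights, ~19x fewer
separable = DepthwiseSeparableConv3d(32, 64)
```

The output shape matches the standard convolution, so the layer is a drop-in replacement inside the encoder-decoder.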
Result
Online validation on the BraTS 2018 (Brain Tumor Segmentation Challenge 2018) dataset shows average Dice coefficients of 0.893 6, 0.804 6, and 0.787 2 for the whole tumor, tumor core, and enhancing tumor, respectively. Compared with S3D-UNet, another lightweight network, LRUNet improves the Dice scores while using only 1/4 of the parameters and 1/2 of the FLOPs (floating point operations).
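The Dice coefficient reported above measures the overlap between a predicted mask and the ground-truth mask. A minimal sketch for binary masks (the convention for two empty masks is an assumption here):

```python
import numpy as np


def dice_coefficient(pred, target):
    """Dice = 2|A n B| / (|A| + |B|) for binary masks; 1.0 is perfect overlap."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, target).sum() / denom


print(dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
```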
Conclusion
Compared with 3D-UNet, S3D-UNet, 3D-ESPNet, and other algorithms, LRUNet not only improves accuracy but also greatly reduces the number of parameters and the computational cost of the network, while substantially increasing prediction speed. This makes rapid semantic segmentation feasible for 3D medical images.
Objective
Brain tumors are divided into primary and secondary tumors, and gliomas into low-grade and high-grade gliomas. Magnetic resonance imaging (MRI) is a vital tool for brain tumor analysis, detection, and surgical planning, and accurate segmentation of brain tumors is crucial for diagnosis and treatment planning. Manual segmentation, however, requires senior doctors to spend a great deal of time, and the intensity profile of the tumor area overlaps significantly with that of healthy tissue. Automatic brain tumor segmentation is therefore increasingly applied in place of manual segmentation.
Method
This work bridges the gap between efficiency and accuracy in 3D MRI brain tumor segmentation models. A lightweight rapid semantic segmentation network called LRUNet is presented. Compared with existing networks, LRUNet improves segmentation accuracy while achieving lightweight, high-precision, and rapid semantic segmentation. Starting from 3D-UNet, the parameter count is cut in three ways. First, the number of channels in each output layer of 3D-UNet is reduced to a quarter of the original, which dramatically lowers the number of network parameters; fewer channels also mean fewer parameters to fit, making the network easier to train. Second, standard 3D convolutions are replaced with 3D depthwise separable convolutions, which decompose a standard convolution into a depthwise convolution followed by a 1×1×1 pointwise convolution. A standard convolutional layer filters and merges its inputs into one output in a single step; a depthwise separable convolution splits this into two layers, one for filtering and one for merging. This factorization greatly reduces computation and model size while largely maintaining accuracy, making the network lightweight enough for fast semantic segmentation. Third, because not every parameter in a convolutional feature map benefits the model, a spatial and channel squeeze & excitation (scSE) module is applied. By squeezing and exciting the feature map along the spatial or channel direction, the module generates a tensor that represents the importance of each channel or spatial point: important channels or spatial points are enhanced and unimportant ones suppressed. The scSE module improves segmentation accuracy while keeping the network lightweight, since it adds almost no parameters. In addition, several training strategies improve the model. First, the tumor sub-regions in the given segmentation maps are merged to form larger, nested training targets. Second, the model with the best intersection over union (IoU) on the validation set is selected as the optimal parameters. Third, binary cross entropy (BCE) Dice loss is adopted as the loss function to address the foreground-background class imbalance of the dataset. Finally, the predicted results are submitted for online evaluation to ensure fairness of the comparison.
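A minimal PyTorch sketch of a concurrent spatial and channel squeeze & excitation (scSE) block in the spirit of Roy et al. (2018); the reduction ratio here is an illustrative choice, not necessarily the paper's:

```python
import torch
import torch.nn as nn


class SCSE3d(nn.Module):
    """Concurrent spatial and channel squeeze & excitation for 3D feature maps."""

    def __init__(self, channels, reduction=2):
        super().__init__()
        # channel SE: global average pool -> bottleneck -> per-channel sigmoid gate
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # spatial SE: 1x1x1 conv -> per-voxel sigmoid gate
        self.sse = nn.Sequential(nn.Conv3d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):
        # recalibrate along the channel axis and the spatial axes, then combine
        return x * self.cse(x) + x * self.sse(x)
```

Because both gates are built from 1×1×1 convolutions and pooling, the block adds very few parameters relative to the convolutions it recalibrates.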
Result
The model is evaluated on the Brain Tumor Segmentation Challenge 2018 (BraTS 2018) online validation set. The average Dice coefficients for the whole tumor, tumor core, and enhancing tumor regions reach 0.893 6, 0.804 6, and 0.787 2, respectively. Compared with 3D-UNet, S3D-UNet, 3D-ESPNet, and other algorithms, LRUNet not only improves accuracy but also greatly reduces the number of parameters and the computational cost of the network.
Conclusion
A new lightweight UNet with only 0.97 M parameters and approximately 31 G floating point operations (FLOPs) is developed. Its parameter count is only 1/16 that of 3D-UNet, and its FLOPs are 1/52 those of 3D-UNet. The validation results demonstrate that the proposed algorithm has clear advantages in both performance and number of network parameters, with segmentation results closest to the ground-truth labels. The lightweight and efficient nature of the network benefits the processing of large-scale 3D medical datasets.
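The parameter savings quoted above follow from the separable factorization itself. For a k×k×k convolution from C_in to C_out channels, the ratio of separable to standard weights is 1/C_out + 1/k³, as this sketch shows (illustrative accounting only; biases, normalization layers, and the unchanged final layer are ignored):

```python
def separable_param_ratio(c_in, c_out, k=3):
    """Weights of (depthwise + pointwise) vs. a standard k*k*k 3D convolution."""
    standard = c_out * c_in * k ** 3           # one k^3 filter per (in, out) pair
    depthwise = c_in * k ** 3                  # one k^3 filter per input channel
    pointwise = c_out * c_in                   # 1x1x1 channel-mixing filters
    return (depthwise + pointwise) / standard  # equals 1/c_out + 1/k**3


print(round(separable_param_ratio(32, 64), 4))  # 0.0527, about a 19x reduction
```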
Bakas S, Akbari H, Sotiras A, Bilello M, Rozycki M, Kirby J S, Freymann J B, Farahani K and Davatzikos C. 2017. Advancing the cancer genome atlas glioma MRI collections with expert segmentation labels and radiomic features. Scientific Data, 4(1): 1-4[DOI: 10.1038/sdata.2017.117]
Bakas S, Reyes M, Jakab A, Bauer S, Rempfler M, Crimi A, Shinohara R T, Berger C, Ha S M, Rozycki M and Prastawa M. 2018. Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge[EB/OL]. [2020-07-12]. https://arxiv.org/pdf/1811.02629.pdf
Chen C, Liu X P, Ding M, Zheng J F and Li J Y. 2019. 3D dilated multi-fiber network for real-time brain tumor segmentation in MRI//Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention. Shenzhen, China: Springer: 184-192[DOI: 10.1007/978-3-030-32248-9_21]
Chen W, Liu B Q, Peng S, Sun J T and Qiao X. 2018. S3D-UNet: separable 3D U-Net for brain tumor segmentation//Proceedings of International MICCAI Brainlesion Workshop. Granada, Spain: Springer: 358-368[DOI: 10.1007/978-3-030-11726-9_32]
Chollet F. 2017. Xception: deep learning with depthwise separable convolutions//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 1251-1258[DOI: 10.1109/CVPR.2017.195]
Çiçek Ö, Abdulkadir A, Lienkamp S S, Brox T and Ronneberger O. 2016. 3D U-Net: learning dense volumetric segmentation from sparse annotation//Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention. Athens, Greece: Springer: 424-432[DOI: 10.1007/978-3-319-46723-8_49]
Havaei M, Davy A, Warde-Farley D, Biard A, Courville A, Bengio Y, Pal C, Jodoin P M and Larochelle H. 2017. Brain tumor segmentation with deep neural networks. Medical Image Analysis, 35: 18-31[DOI: 10.1016/j.media.2016.05.004]
He H and Chen S. 2020. Automatic tumor segmentation in PET by deep convolutional U-Net with pre-trained encoder. Journal of Image and Graphics, 25(1): 171-179[DOI: 10.11834/jig.190058]
Howard A G, Zhu M L, Chen B, Kalenichenko D, Wang W J, Weyand T, Andreetto M and Adam H. 2017. MobileNets: efficient convolutional neural networks for mobile vision applications[EB/OL]. [2020-07-11]. https://arxiv.org/pdf/1704.04861.pdf
Hu J, Shen L and Sun G. 2018. Squeeze-and-excitation networks//Proceedings of 2018 IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 7132-7141[DOI: 10.1109/CVPR.2018.00745]
Isensee F, Kickingereder P, Wick W, Bendszus M and Maier-Hein K H. 2017. Brain tumor segmentation and radiomics survival prediction: contribution to the BraTS 2017 challenge//Proceedings of International MICCAI Brainlesion Workshop. Quebec City, Canada: Springer: 287-297[DOI: 10.1007/978-3-319-75238-9_25]
Jiang Z K, Lyu X G, Zhang J X, Zhang Q and Wei X P. 2020. Review of deep learning methods for MRI brain tumor image segmentation. Journal of Image and Graphics, 25(2): 215-228[DOI: 10.11834/jig.190173]
Kamnitsas K, Bai W, Ferrante E, McDonagh S, Sinclair M, Pawlowski N, Rajchl M, Lee M, Kainz B, Rueckert D and Glocker B. 2017. Ensembles of multiple models and architectures for robust brain tumour segmentation//Proceedings of International MICCAI Brainlesion Workshop. Quebec City, Canada: Springer: 450-462[DOI: 10.1007/978-3-319-75238-9_38]
Kingma D P and Ba J. 2017. Adam: a method for stochastic optimization[EB/OL]. [2020-07-11]. https://arxiv.org/pdf/1412.6980.pdf
Liu C, Xiao Z Y and Du N M. 2019. Application of improved convolutional neural network in medical image segmentation. Journal of Frontiers of Computer Science and Technology, 13(9): 1593-1603[DOI: 10.3778/j.issn.1673-9418.1904009]
Liu Z H, Chen L, Tong L, Zhou F X, Jiang Z H, Zhang Q N, Shan C F, Wang Y H, Zhang X R, Li L and Zhou H Y. 2020. Deep learning based brain tumor segmentation: a survey[EB/OL]. [2020-07-21]. https://arxiv.org/pdf/2007.09479.pdf
Long J, Shelhamer E and Darrell T. 2015. Fully convolutional networks for semantic segmentation//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE: 3431-3440[DOI: 10.1109/CVPR.2015.7298965]
Menze B H, Jakab A, Bauer S, Kalpathy-Cramer J, Farahani K, Kirby J, Burren Y, Porz N, Slotboom J, Wiest R and Lanczi L. 2014. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Transactions on Medical Imaging, 34(10): 1993-2024[DOI: 10.1109/TMI.2014.2377694]
Milletari F, Navab N and Ahmadi S A. 2016. V-Net: fully convolutional neural networks for volumetric medical image segmentation//Proceedings of the 4th International Conference on 3D Vision. San Francisco, USA: IEEE: 565-577[DOI: 10.1109/3DV.2016.79]
Nuechterlein N and Mehta S. 2018. 3D-ESPNet with pyramidal refinement for volumetric brain tumor image segmentation//Proceedings of International MICCAI Brainlesion Workshop. Granada, Spain: Springer: 245-253[DOI: 10.1007/978-3-030-11726-9_22]
Ronneberger O, Fischer P and Brox T. 2015. U-Net: convolutional networks for biomedical image segmentation//Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich, Germany: Springer: 234-241[DOI: 10.1007/978-3-319-24574-4_28]
Roy A G, Navab N and Wachinger C. 2018. Concurrent spatial and channel "squeeze & excitation" in fully convolutional networks//Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention. Granada, Spain: Springer: 421-429[DOI: 10.1007/978-3-030-00928-1_48]