Semantic Laplacian pyramids network for multicenter breast tumor segmentation
2021, Vol. 26, No. 9: 2193-2207
Print publication date: 2021-09-16
Accepted: 2021-05-19
DOI: 10.11834/jig.210138
Li Wang, Ying Cao, Shunchao Guo, Lei Tang, Zixiang Kuai, Rongpin Wang, Lihui Wang. Semantic Laplacian pyramids network for multicenter breast tumor segmentation[J]. Journal of Image and Graphics, 2021,26(9):2193-2207.
Objective
Breast tumor segmentation plays a key role in computer-assisted diagnosis and treatment of breast cancer, but most existing studies focus on single-center data, generalize poorly, and cannot cope with complex clinical data. We therefore propose a semantic Laplacian pyramid network (SLAPNet) to segment breast tumors accurately on multicenter data.
Method
SLAPNet consists of two structures, a Gaussian pyramid and a semantic pyramid: the former produces multiscale image inputs, and the latter extracts multiscale semantic features and propagates them across scales.
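The Gaussian pyramid stage described above (Gaussian smoothing followed by downsampling to build multiscale inputs) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the number of levels and the filter width `sigma` are assumptions for demonstration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(image, levels=3, sigma=1.0):
    """Build multilevel inputs by repeatedly Gaussian-smoothing and
    downsampling the image by a factor of 2 (a standard Gaussian pyramid).
    `levels` and `sigma` are illustrative choices, not values from the paper."""
    pyramid = [image]
    current = image
    for _ in range(levels - 1):
        smoothed = gaussian_filter(current, sigma=sigma)  # denoise and blur fine details
        current = smoothed[::2, ::2]                      # downsample by 2 in each axis
        pyramid.append(current)
    return pyramid
```

Each level halves the spatial resolution, so large structures (tumor shape, gray-level distribution) dominate the coarse levels while edges and textures remain in the fine levels.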
Result
The network uses the Dice similarity coefficient (DSC) as its optimization objective. To validate model performance, we tested on multicenter data and compared against deep learning models including Attention UNet, PSPNet (pyramid scene parsing network), UNet 3+, MSDNet (multiscale dual attention network), and PyConvUNet (pyramid convolutional network), using metrics such as DSC and the Jaccard coefficient (JC) for quantitative analysis. On the internal dataset, the proposed model achieved a DSC of 0.826 for breast tumor segmentation; on the public dataset, it achieved a DSC of 0.774, about 1.3% higher than PyConvUNet and about 1.5% higher than PSPNet and UNet 3+.
Conclusion
By combining multiscale and multilevel semantic features, the proposed semantic Laplacian pyramid network can accurately and automatically segment breast tumors on multicenter data.
Objective
Accurate diagnosis and early prognosis of breast cancer can increase the survival rates of breast cancer patients. In clinical practice, breast cancer treatment often includes neoadjuvant chemotherapy (NAC), which attempts to reduce tumor size and increase the chance of breast-conserving surgery. However, some patients do not respond positively to NAC and do not show a pathologically complete response; for these patients, NAC is time consuming and highly risky. Therefore, an efficient method for precisely predicting NAC response is essential. A potential scheme is to use medical imaging techniques, such as magnetic resonance imaging, to build a computer-assisted diagnosis (CAD) system for predicting NAC response. Most existing CAD methods focus on tumor features, which depend heavily on region-of-interest (ROI) segmentation. At present, breast tumors are mostly segmented manually, which cannot satisfy real-time and accuracy requirements; automatic breast tumor segmentation is a potential way to address this issue. Although numerous works on breast tumor segmentation have been proposed and some have achieved good results, they mainly focus on single-center datasets. Improving the generalization ability of a model so that it performs well on multicenter datasets still presents a great challenge. To address this problem, we propose a semantic Laplacian pyramid network (SLAPNet) for segmenting breast tumors in multicenter datasets.
Method
SLAPNet is composed of a Gaussian pyramid and a semantic pyramid. The Gaussian pyramid creates multilevel inputs so that the model captures not only global image features, such as shape and gray-level distribution, but also local image features, such as edges and textures. It is implemented by smoothing and downsampling the input images with Gaussian filters, which denoises the images and blurs fine details, thereby highlighting the characteristics of large structures. By combining these multiscale inputs, SLAPNet becomes more robust and generalizable and can better handle irregular objects. The semantic pyramid is produced by first using UNet to extract deep semantic features from the multilevel inputs and then connecting adjacent layers so that deep semantic features are transferred across layers. This strategy fuses features of multiple semantic levels and multiple scales to improve model performance. To reduce the influence of class imbalance, we selected Dice loss as our loss function. To validate the superiority of the proposed method, we trained SLAPNet and other state-of-the-art models on multicenter datasets. Finally, the accuracy (ACC), specificity, sensitivity (SEN), Dice similarity coefficient (DSC), precision, and Jaccard coefficient were used to quantitatively analyze the segmentation results.
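The Dice loss and the two main overlap metrics named above follow directly from their definitions. The sketch below is a generic NumPy formulation for binary masks, not the authors' implementation; the smoothing term `eps` is an assumption added to avoid division by zero.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) between two binary masks."""
    intersection = np.sum(pred * target)
    return (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

def jaccard_coefficient(pred, target, eps=1e-7):
    """Jaccard coefficient (intersection over union) between two binary masks."""
    intersection = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - intersection
    return (intersection + eps) / (union + eps)

def dice_loss(pred, target):
    """Dice loss = 1 - DSC. Because it is normalized by region size, it is
    less sensitive to class imbalance than pixel-wise cross-entropy."""
    return 1.0 - dice_coefficient(pred, target)
```

Since tumor pixels are typically a small fraction of the image, an overlap-based loss like this keeps the small foreground class from being dominated by the background.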
Result
Compared with Attention UNet, DeeplabV3, fully convolutional network (FCN), pyramid scene parsing network (PSPNet), UNet, UNet 3+, multiscale dual attention network (MSDNet), and pyramid convolutional network (PyConvUNet), our model achieved the highest DSC: 0.83 when tested on the dataset acquired from Harbin Medical University Cancer Hospital and 0.77 when tested on the public I-SPY 1 (investigation of serial studies to predict your therapeutic response with imaging and molecular analysis 1) dataset, an improvement of at least 1.3%. The visualization results show that SLAPNet produced only a small amount of misclassification and omission in marginal regions, and its segmented edges were better than those of the other models. The error maps likewise indicate that SLAPNet outperformed the other models in breast tumor segmentation. Finally, to further validate the stability of the proposed model, we provide boxplots of the evaluation metrics, which show that the DSC, Jaccard coefficient, SEN, and ACC of the proposed model were higher than those of the other models and that its three quartiles were closer together, indicating that SLAPNet is more stable for multicenter breast tumor segmentation.
Conclusion
The semantic Laplacian pyramid network proposed in this paper extracts deep semantic features from multilevel inputs and then fuses multiscale deep semantic features. This structure guarantees the high expressive ability of the deep features, and combining multiscale semantic features captures more expressive features related to image details. Therefore, the proposed model can better distinguish edges and texture features in tumors. The results demonstrate that the pyramid model achieved the best performance in multicenter breast tumor segmentation.
breast tumor segmentation; deep learning; semantic pyramids; multiscale semantic feature; multicenter dataset
Benjelloun M, El Adoui M, Larhmam M A and Mahmoudi S A. 2018. Automated breast tumor segmentation in DCE-MRI using deep learning//Proceedings of the 4th International Conference on Cloud Computing Technologies and Applications (Cloudtech). Brussels, Belgium: IEEE: 1-6[DOI:10.1109/CloudTech.2018.8713352]
Bray F, Ferlay J, Soerjomataram I, Siegel R L, Torre L A and Jemal A. 2018. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: A Cancer Journal for Clinicians, 68(6): 394-424[DOI:10.3322/caac.21492]
Burt P J and Adelson E H. 1987. The Laplacian pyramid as a compact image code//Fischler M A and Firschein O, eds. Readings in Computer Vision. Amsterdam: Elsevier: 671-679
Chen L C, Papandreou G, Schroff F and Adam H. 2017. Rethinking atrous convolution for semantic image segmentation[EB/OL]. [2020-02-08].https://arxiv.org/pdf/1706.05587.pdf
Chen M J, Zheng H, Lu C S, Tu E M, Yang J and Kasabov N. 2018. A spatio-temporal fully convolutional network for breast lesion segmentation in DCE-MRI//Proceedings of the 25th International Conference on Neural Information Processing. Siem Reap, Cambodia: Springer: 358-368[DOI:10.1007/978-3-030-04239-4_32]
Clark K, Vendt B, Smith K, Freymann J, Kirby J, Koppel P, Moore S, Phillips S, Maffitt D, Pringle M, Tarbox L and Prior F. 2013. The cancer imaging archive (TCIA): maintaining and operating a public information repository. Journal of Digital Imaging, 26(6): 1045-1057[DOI:10.1007/s10278-013-9622-7]
Dabass J, Arora S, Vig R and Hanmandlu M. 2019. Segmentation techniques for breast cancer imaging modalities-a review//Proceedings of the 9th International Conference on Cloud Computing, Data Science and Engineering (Confluence). Noida, India: IEEE: 658-663[DOI:10.1109/CONFLUENCE.2019.8776937]
El Adoui M, Mahmoudi S A, Larhmam M A and Benjelloun M. 2019. MRI breast tumor segmentation using different encoder and decoder CNN architectures. Computers, 8(3): #52[DOI:10.3390/computers8030052]
Ghiasi G and Fowlkes C C. 2016. Laplacian pyramid reconstruction and refinement for semantic segmentation//Proceedings of the 14th European Conference on Computer Vision. Amsterdam, the Netherlands: Springer: 519-534[DOI:10.1007/978-3-319-46487-9_32]
Giannini V, Mazzetti S, Marmo A, Montemurro F, Regge D and Martincich L. 2017. A computer-aided diagnosis (CAD) scheme for pretreatment prediction of pathological response to neoadjuvant therapy using dynamic contrast-enhanced MRI texture features. The British Journal of Radiology, 90(1077): #20170269[DOI:10.1259/bjr.20170269]
Huang G, Liu Z, Van Der Maaten L and Weinberger K Q. 2017. Densely connected convolutional networks//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE: 2261-2269
Huang H M, Lin L F, Tong R F, Hu H J, Zhang Q W, Iwamoto Y, Han X H, Chen Y W and Wu J. 2020. UNet 3+: a full-scale connected UNet for medical image segmentation//Proceedings of ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Barcelona, Spain: IEEE: 1055-1059[DOI:10.1109/ICASSP40776.2020.9053405]
Jiao H, Jiang X H, Pang Z Y, Lin X F, Huang Y H and Li L. 2020. Deep convolutional neural networks-based automatic breast segmentation and mass detection in DCE-MRI. Computational and Mathematical Methods in Medicine, 2020: #2413706[DOI:10.1155/2020/2413706]
Kim Y, Kim S H, Song B J, Kang B J, Yim K I, Lee A and Nam Y. 2018. Early prediction of response to neoadjuvant chemotherapy using dynamic contrast-enhanced MRI and ultrasound in breast cancer. Korean Journal of Radiology, 19(4): 682-691[DOI:10.3348/kjr.2018.19.4.682]
Li C Y, Fan Y X and Cai X D. 2021. PyConvU-Net: a lightweight and multiscale network for biomedical image segmentation. BMC Bioinformatics, 22(1): #14[DOI:10.1186/s12859-020-03943-2]
Liu Z Y, Li Z L, Qu J R, Zhang R Z, Zhou X Z, Li L F, Sun K, Tang Z C, Jiang H, Li H L, Xiong Q Q, Ding Y Y, Zhao X, Wang K, Liu Z Y and Tian J. 2019. Radiomics of multiparametric MRI for pretreatment prediction of pathologic complete response to neoadjuvant chemotherapy in breast cancer: a multicenter study. Clinical Cancer Research, 25(12): 3538-3547[DOI:10.1158/1078-0432.CCR-18-3190]
Long J, Shelhamer E and Darrell T. 2015. Fully convolutional networks for semantic segmentation//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston, USA: IEEE: 3431-3440[DOI:10.1109/CVPR.2015.7298965]
Masood S. 2016. Neoadjuvant chemotherapy in breast cancers. Women's Health, 12(5): 480-491[DOI:10.1177/1745505716677139]
Men K, Zhang T, Chen X Y, Chen B, Tang Y, Wang S L, Li Y X and Dai J R. 2018. Fully automatic and robust segmentation of the clinical target volume for radiotherapy of breast cancer using big data and deep learning. Physica Medica, 50: 13-19[DOI:10.1016/j.ejmp.2018.05.006]
Michoux N, Van Den Broeck S, Lacoste L, Fellah L, Galant C, Berlière M and Leconte I. 2015. Texture analysis on MR images helps predicting non-response to NAC in breast cancer. BMC Cancer, 15(1): #574[DOI:10.1186/s12885-015-1563-8]
Newitt D and Hylton N. 2016. Multi-center breast DCE-MRI data and segmentations from patients in the I-SPY 1/ACRIN 6657 trials[EB/OL]. [2020-02-08].https://doi.org/10.7937/K9/TCIA.2016.HdHpgJLK
Oktay O, Schlemper J, Folgoc L L, Lee M, Heinrich M, Misawa K, Mori K, McDonagh S, Hammerla N Y, Kainz B, Glocker B and Rueckert D. 2018. Attention U-Net: learning where to look for the pancreas[EB/OL]. [2020-02-08].https://arxiv.org/pdf/1804.03999.pdf
Ronneberger O, Fischer P and Brox T. 2015. U-Net: convolutional networks for biomedical image segmentation//Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich, Germany: Springer: 234-241[DOI:10.1007/978-3-319-24574-4_28]
Sinha A and Dolz J. 2021. Multi-scale self-guided attention for medical image segmentation. IEEE Journal of Biomedical and Health Informatics, 25(1): 121-130[DOI:10.1109/JBHI.2020.2986926]
Sung H, Ferlay J, Siegel R L, Laversanne M, Soerjomataram I, Jemal A and Bray F. 2021. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: A Cancer Journal for Clinicians, 71(3): 209-249[DOI:10.3322/caac.21660]
Vaidya J S, Massarut S, Vaidya H J, Alexander E C, Richards T, Caris J A, Sirohi B, Jeffrey S and Tobias J S. 2018. Rethinking neoadjuvant chemotherapy for breast cancer. BMJ, 360: #5913[DOI:10.1136/bmj.j5913]
Xu Y, Wang Y X, Yuan J, Cheng Q, Wang X D and Carson P L. 2019. Medical breast ultrasound image segmentation by machine learning. Ultrasonics, 91: 1-9[DOI:10.1016/j.ultras.2018.07.006]
Zaffino P, Ciardo D, Raudaschl P, Fritscher K, Ricotti R, Alterio D, Marvaso G, Fodor C, Baroni G, Amato F, Orecchia R, Jereczek-Fossa B A, Sharp G C and Spadea M F. 2018. Multi atlas based segmentation: should we prefer the best atlas group over the group of best atlases? Physics in Medicine & Biology, 63(12): #12NT01[DOI:10.1088/1361-6560/aac712]
Zebari D A, Zeebaree D Q, Abdulazeez A M, Haron H and Hamed H N A. 2020. Improved threshold based and trainable fully automated segmentation for breast cancer boundary and pectoral muscle in mammogram image. IEEE Access, 8: 203097-203116[DOI:10.1109/ACCESS.2020.3036072]
Zhang J, Saha A, Zhu Z and Mazurowski M A. 2018. Hierarchical convolutional neural networks for segmentation of breast tumors in MRI with application to radiogenomics. IEEE Transactions on Medical Imaging, 38(2): 435-447[DOI:10.1109/TMI.2018.2865671]
Zhang L, Mohamed A A, Chai R, Guo Y, Zheng B J and Wu S S. 2020. Automated deep learning method for whole-breast segmentation in diffusion-weighted breast MRI. Journal of Magnetic Resonance Imaging, 51(2): 635-643[DOI:10.1002/jmri.26860]
Zhao H S, Shi J P, Qi X J, Wang X G and Jia J Y. 2017. Pyramid scene parsing network//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE: 6230-6239[DOI:10.1109/CVPR.2017.660]