Tumor segmentation in breast ultrasound combined with Res paths and a dense connection
2021, Vol. 26, No. 3, Pages 633-643
Received: 2020-03-20; Revised: 2020-06-29; Accepted: 2020-07-05; Published in print: 2021-03-16
DOI: 10.11834/jig.200078
Objective
Breast cancer is a common tumor disease with a high incidence, and early diagnosis is the key to its prevention. To obtain accurate information on tumor edges and shapes and thereby improve the accuracy of breast tumor diagnosis, this paper proposes a breast ultrasound tumor segmentation method that combines residual paths (Res paths) with dense connections.
Method
Based on the classic deep learning segmentation model U-Net, Res paths are added to reduce the discrepancy between the encoder and decoder feature maps. On this basis, dense blocks are introduced between the feature input layer and the last decoding step, forming a new connection from the input feature maps to the last decoding layer; this narrows the gap between the input and decoded feature maps, reduces feature loss, and preserves more useful information. A sketch of the Res path design is given after this paragraph.
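The abstract does not include implementation code, so the following is only a minimal Keras sketch of one Res path under the stated design, in which each residual unit pairs a 3×3 convolution with a 1×1 shortcut convolution. The filter count and the placement of activations are illustrative assumptions, not the authors' released settings.

```python
from tensorflow.keras import layers

def res_path(x, filters, num_units):
    # Pass an encoder feature map through a chain of residual units
    # before it is concatenated with the decoder feature map.
    for _ in range(num_units):
        shortcut = layers.Conv2D(filters, 1, padding='same',
                                 kernel_initializer='he_normal')(x)   # 1x1 shortcut
        out = layers.Conv2D(filters, 3, padding='same', activation='relu',
                            kernel_initializer='he_normal')(x)        # 3x3 branch
        x = layers.Activation('relu')(layers.Add()([out, shortcut]))
    return x

# The four skip connections use 4, 3, 2, and 1 residual units respectively,
# e.g. skip1 = res_path(enc1, 32, 4) for the shallowest level (32 is assumed).
```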
Result
The proposed model is compared with the classic U-Net model and with U-Net with Res paths by 10-fold cross validation on the breast tumor ultrasound dataset of the Chongming branch of Xinhua Hospital, Shanghai. The true positive rate (TP), Jaccard similarity (JS), and Dice coefficient (DC) of the proposed model are 0.870 7, 0.803 7, and 0.882 4, improvements of 1.08%, 2.14%, and 2.01% over U-Net, respectively; the false positive rate (FP) and Hausdorff distance (HD) are 0.104 0 and 22.311 4, reductions of 1.68% and 1.410 2, respectively. On each 54-image test set, the average number of tumor images with JS > 0.75 is 42.1, with a maximum of 46. The comparative results show that the proposed algorithm effectively improves the segmentation results and accuracy.
Conclusion
The proposed segmentation model, which is based on the U-Net structure and combines Res paths with the new connection, improves the accuracy of breast ultrasound tumor image segmentation.
Objective
Precise segmentation of breast cancer tumors is of great concern. For women, breast cancer is a common tumor disease with a high incidence, and accurate diagnosis in the early stage has always been the key to preventing it. Doctors can improve the accuracy of breast tumor diagnosis by obtaining accurate information on the edge and shape of the tumor. Common breast imaging techniques include ultrasound imaging, magnetic resonance imaging (MRI), and X-ray imaging. However, X-ray imaging often causes radiation damage to breast tissue, whereas MRI is not only expensive but also requires a longer scanning time. Compared with these two methods, ultrasound imaging has the advantages of no radiation damage to tissue, ease of use, the ability to image any part of the breast, fast imaging speed, and low cost. However, because of problems such as speckle noise and low resolution, interpreting ultrasound images depends more heavily on professional sonographers than other commonly used techniques do; thus, experienced, well-trained doctors are needed in the diagnostic process. In recent years, combining medical imaging technology with computer science to segment tumors accurately and assist medical personnel in diagnosis and identification has become a trend. In the past 10 years, various methods, such as thresholding, clustering-based algorithms, graph-based algorithms, and active contour algorithms, have been used to segment breast tumors in ultrasound images. However, these methods have limited ability to represent features. In the past few years, deep convolutional neural networks have become widely used in visual recognition tasks because they can automatically learn features suited to the target data and task. Convolutional networks have existed for a long time, but the hardware environment of earlier years limited their development, because large training sets and large network structures require a large amount of computation. The fully convolutional network (FCN) is an effective convolutional neural network for semantic segmentation: it can be trained end-to-end and pixel-to-pixel, accepts input images of arbitrary size, and outputs a correspondingly sized map containing the target information. U-Net is an improvement on the FCN model; it not only addresses these problems but also makes full use of limited samples to train well on biomedical images.
Method
In this paper, a deep learning segmentation model is proposed based on the U-Net framework. It incorporates "Res paths" to reduce the difference between the encoder and decoder feature maps, and it establishes a new connection composed of dense units. The Res paths consist of a series of residual units, each composed of a 3×3 convolution kernel and a 1×1 convolution kernel; the numbers of residual units are 4, 3, 2, and 1, in order, along the four Res paths in the framework. The new connection is a dense block running from the input feature maps to the decoding part; the input of each layer is the concatenation of the outputs of all previous layers, which alleviates the loss of feature information and the vanishing of gradients. The dataset from the Chongming branch of Xinhua Hospital in Shanghai is used in this paper. It was acquired with a Samsung RS80A color Doppler ultrasound diagnostic instrument (equipped with a high-frequency L3-12A probe), and the images clearly show the morphology, internal structure, and surrounding tissues of the lesions. All patients in this dataset are female, aged 24 to 86, neither pregnant nor lactating, and with no history of radiotherapy, chemotherapy, or endocrine therapy before the examination. Ten-fold cross validation is used: 538 breast ultrasound tumor images selected from the dataset are randomly divided into 10 folds. In each fold, 54 breast ultrasound images are used for testing, and the 484 remaining images are used for training. In the experiment, the 484 images are doubled to 968 by augmentation with an image data generator, and 48 breast tumor images are randomly selected for validation during training. Keras is used to build the model framework, and training is performed on an NVIDIA Titan 1080 GPU with "he_normal" initialization of the model parameters. The proposed model is trained with the Adam optimizer, using cross entropy as the loss function and setting the batch size, β1, β2, and learning rate to 4, 0.9, 0.999, and 0.000 1, respectively. A hedged sketch of the dense connection and the training configuration follows.
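As a hedged illustration, assuming a DenseNet-style block for the new connection (the growth rate and number of layers are not given in the abstract) and binary cross entropy as the concrete form of the stated cross-entropy loss, a minimal Keras sketch might look as follows.

```python
from tensorflow.keras import layers, optimizers

def dense_block(x, growth_rate, num_layers):
    # Each new layer receives the concatenation of all preceding feature maps,
    # so early features reach the final decoding layer with less loss.
    features = [x]
    for _ in range(num_layers):
        inp = layers.Concatenate()(features) if len(features) > 1 else features[0]
        y = layers.Conv2D(growth_rate, 3, padding='same', activation='relu',
                          kernel_initializer='he_normal')(inp)
        features.append(y)
    return layers.Concatenate()(features)

def compile_model(model):
    # Training configuration as reported: Adam with learning rate 0.000 1,
    # beta_1 = 0.9, beta_2 = 0.999; binary cross entropy is an assumption
    # for single-class tumor masks. Batch size 4 is used in model.fit().
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-4,
                                            beta_1=0.9, beta_2=0.999),
                  loss='binary_crossentropy')
    return model
```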
Result
Ten-fold cross validation is performed on the three models (U-Net, U-Net with Res paths, and the proposed model), using the same test, validation, and training sample sets each time. The first model is the classic U-Net. The second adds Res paths to the basic U-Net network structure. The third, proposed by us, improves on the second by introducing the new connection. The numbers of training epochs of the three models are 80, 100, and 120, in order. Compared with the classic U-Net model, the true positive rate, Jaccard similarity (JS), and Dice coefficient of the proposed model are 0.870 7, 0.803 7, and 0.882 4, improvements of 1.08%, 2.14%, and 2.01%, respectively. The false positive rate and Hausdorff distance are 0.104 0 and 22.311 4, decreases of 1.68% and 1.410 2, respectively. In each test set of 54 images, the average number of tumor images with JS > 0.75 is 42.1, with a maximum of 46. Experimental results show that the proposed improved algorithm improves the segmentation results. A sketch of one common way to compute these metrics follows.
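The abstract does not spell out the metric formulas, so the sketch below uses one common convention for these indices on binary masks (TP and FP rates normalized by the ground-truth area); the paper's exact definitions may differ in detail.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def region_metrics(pred, gt):
    # pred, gt: binary segmentation masks of shape (H, W).
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    tp = inter / gt.sum()                      # true positive rate
    fp = (pred.sum() - inter) / gt.sum()       # false positive rate
    js = inter / union                         # Jaccard similarity (JS)
    dc = 2 * inter / (pred.sum() + gt.sum())   # Dice coefficient (DC)
    return tp, fp, js, dc

def hausdorff(pred, gt):
    # Symmetric Hausdorff distance between the two masks' pixel coordinate sets.
    p, g = np.argwhere(pred), np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```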
Conclusion
The proposed segmentation model, based on the U-Net network and combining the Res paths with the new connection, improves the precision of segmentation of breast ultrasound tumor images.
Abramovich F, Sapatinas T and Silverman B W. 1998. Wavelet thresholding via a Bayesian approach. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 60(4): 725-749 [DOI:10.1111/1467-9868.00151]
Alexiou C, Arnold W, Hulin P, Klein R J, Renz H, Parak F G, Bergemann C and Lubbe A S. 2001. Magnetic mitoxantrone nanoparticle detection by histology, X-ray and MRI after magnetic tumor targeting. Journal of Magnetism and Magnetic Materials, 225(1/2): 187-193[DOI:10.1016/S0304-8853(00)01256-7]
Almajalid R, Shan J, Du Y D and Zhang M. 2018. Development of a deep-learning-based method for breast ultrasound image segmentation//Proceedings of the 17th IEEE International Conference on Machine Learning and Applications. Orlando, USA: IEEE: 1103-1108 [DOI:10.1109/ICMLA.2018.00179]
Bock S, Goppold J and Weiß M. 2018. An improvement of the convergence proof of the ADAM-optimizer [EB/OL]. (2018-04-27) [2020-03-09]. https://arxiv.org/pdf/1804.10587.pdf
Cheng H D, Shan J, Ju W, Guo Y H and Zhang L. 2010. Automated breast cancer detection and classification using ultrasound images: a survey. Pattern Recognition, 43(1): 299-317[DOI:10.1016/j.patcog.2009.05.012]
Ding J R, Huang Z C, Shi M D and Ning C P. 2019. Automatic thyroid ultrasound image segmentation based on U-shaped network//Proceedings of the 12th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics. Suzhou, China: IEEE: 1-5 [DOI:10.1109/CISP-BMEI48845.2019.8966062]
Freed M, De Zwart J A, Loud J T, El Khouli R H, Myers K J, Greene M H, Duyn J H and Badano A. 2011. An anthropomorphic phantom for quantitative evaluation of breast MRI. Medical Physics, 38(2): 743-753[DOI:10.1118/1.3533899]
Guan S, Khan A A, Sikdar S and Chitnis P V. 2020. Fully dense UNet for 2-D sparse photoacoustic tomography artifact removal. IEEE Journal of Biomedical and Health Informatics, 24(2): 568-576[DOI:10.1109/JBHI.2019.2912935]
He K M, Zhang X Y, Ren S Q and Sun J. 2015. Delving deep into rectifiers: surpassing human-level performance on ImageNet classification//Proceedings of 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE: 1026-1034 [DOI:10.1109/ICCV.2015.123]
He K M, Zhang X Y, Ren S Q and Sun J. 2016. Identity mappings in deep residual networks//Proceedings of the 14th European Conference on Computer Vision. Amsterdam, the Netherlands: Springer: 630-645 [DOI:10.1007/978-3-319-46493-0_38]
Huang G, Liu Z, van der Maaten L and Weinberger K Q. 2017. Densely connected convolutional networks//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 2261-2269 [DOI:10.1109/CVPR.2017.243]
Ibtehaz N and Rahman M S. 2020. MultiResUNet: rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Networks, 121: 74-87[DOI:10.1016/j.neunet.2019.08.025]
Jégou S, Drozdzal M, Vazquez D, Romero A and Bengio Y. 2017. The one hundred layers tiramisu: fully convolutional DenseNets for semantic segmentation//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops. Honolulu, USA: IEEE: 1175-1183 [DOI:10.1109/CVPRW.2017.156]
Ketkar N. 2017. Introduction to Keras//Ketkar N, ed. Deep Learning with Python. Berkeley: Apress: 97-111 [DOI:10.1007/978-1-4842-2766-4_7]
Li X, Hong Y, Kong D X and Zhang X L. 2019. Automatic segmentation of levator hiatus from ultrasound images using U-net with dense connections. Physics in Medicine and Biology, 64(7): 075015[DOI:10.1088/1361-6560/ab0ef4]
Liu B, Cheng H D, Huang J H, Liu J W, Tang X L and Liu J F. 2010. Fully automatic and segmentation-robust classification of breast tumors based on local texture analysis of ultrasound images. Pattern Recognition, 43(1): 280-298 [DOI:10.1016/j.patcog.2009.06.002]
Loizou C P, Pattichis C S, Christodoulou C I, Istepanian R S H, Pantziaris M and Nicolaides A. 2005. Comparative evaluation of despeckle filtering in ultrasound imaging of the carotid artery. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 52(10): 1653-1669[DOI:10.1109/TUFFC.2005.1561621]
Long F N, Zhu X S and Gan J Z. 2018. Ultrasound image segmentation of brachial plexus via convolutional neural networks. Journal of Hefei University of Technology (Natural Science), 41(9): 1191-1195, 1296 [DOI:10.3969/j.issn.1003-5060.2018.09.007]
Long J, Shelhamer E and Darrell T. 2015. Fully convolutional networks for semantic segmentation//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE: 3431-3440 [DOI:10.1109/CVPR.2015.7298965]
Lotfollahi M, Gity M, Ye J Y and Far A M. 2018. Segmentation of breast ultrasound images based on active contours using neutrosophic theory. Journal of Medical Ultrasonics, 45(2): 205-212[DOI:10.1007/s10396-017-0811-8]
Luo Y Z, Han S J and Huang Q H. 2016. A novel graph-based segmentation method for breast ultrasound images//Proceedings of 2016 International Conference on Digital Image Computing: Techniques and Applications. Gold Coast, Australia: IEEE: 1-6 [DOI:10.1109/DICTA.2016.7796992]
Pandian N G, Kreis A, Brockway B, Isner J M, Sacharoff A, Boleza E, Caro R and Muller D. 1988. Ultrasound angioscopy: real-time, two-dimensional, intraluminal ultrasound imaging of blood vessels. The American Journal of Cardiology, 62(7): 493-494[DOI:10.1016/0002-9149(88)90992-7]
Raza S, Chikarmane S A, Neilsen S S, Zorn L M and Birdwell R L. 2008. BI-RADS 3, 4, and 5 lesions: value of US in management-follow-up and outcome. Radiology, 248(3): 773-781[DOI:10.1148/radiol.2483071786]
Rodrigues P S and Giraldi G A. 2011. Improving the non-extensive medical image segmentation based on tsallis entropy. Pattern Analysis and Applications, 14(4): 369-379[DOI:10.1007/s10044-011-0225-y]
Ronneberger O, Fischer P and Brox T. 2015. U-Net: convolutional networks for biomedical image segmentation//Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich, Germany: Springer: 234-241 [DOI:10.1007/978-3-319-24574-4_28]
Shan J, Cheng H D and Wang Y X. 2012. A novel segmentation method for breast ultrasound images based on neutrosophic l-means clustering. Medical Physics, 39(9): 5669-5682[DOI:10.1118/1.4747271]
Siegel R L, Miller K D and Jemal A. 2016. Cancer statistics, 2016. CA: A Cancer Journal for Clinicians, 66(1): 7-30[DOI:10.3322/caac.21332]
Simonyan K and Zisserman A. 2015. Very deep convolutional networks for large-scale image recognition [EB/OL]. (2015-04-10) [2020-03-09]. https://arxiv.org/pdf/1409.1556.pdf
Szegedy C, Liu W, Jia Y Q, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V and Rabinovich A. 2015. Going deeper with convolutions//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE: 1-9 [DOI:10.1109/CVPR.2015.7298594]
Wang K J, Ma H, Popoola O P and Li X F. 2010. A novel finger vein pattern extraction method using oriented filtering technology//Proceedings of the 8th World Congress on Intelligent Control and Automation. Jinan, China: IEEE: 6240-6244 [DOI:10.1109/WCICA.2010.5554393]
Xu L, Liu M Y, Shen Z R, Wang H, Liu X W, Wang X, Wang S Y, Li T F, Yu S M, Hou M, Guo J H, Zhang J C and He Y H. 2020. DW-Net: a cascaded convolutional neural network for apical four-chamber view segmentation in fetal echocardiography. Computerized Medical Imaging and Graphics, 80: 101690[DOI:10.1016/j.compmedimag.2019.101690]
Zhang Z X, Liu Q J and Wang Y H. 2018. Road extraction by deep residual u-net. IEEE Geoscience and Remote Sensing Letters, 15(5): 749-753[DOI:10.1109/LGRS.2018.2802944]
Zhu K, Fu Z L and Chen X Q. 2019. Left ventricular segmentation method of ultrasound image based on convolutional neural network. Journal of Computer Applications, 39(7): 2121-2124 [DOI:10.11772/j.issn.1001-9081.2018112321]