Ultrasound fetal head edge detection using fusion UNet++
2020, Vol. 25, No. 2, pp. 366-377
Received: 2019-06-06; Revised: 2019-09-03; Accepted: 2019-09-10; Published in print: 2020-02-16
DOI: 10.11834/jig.190242
Objective
Ultrasound fetal head edge detection is a key step in fetal head circumference measurement. Fuzzy boundaries in fetal head ultrasound images, partial loss of the fetal skull caused by acoustic shadowing, and structures formed by the amniotic fluid and uterine wall whose texture and gray scale resemble those of the fetal head all interfere with detection, making ultrasound fetal head edge detection and head circumference measurement difficult. This paper proposes an end-to-end neural network segmentation method for ultrasound images to detect the fetal head edge.
Method
Built on the UNet++ architecture, the proposed Fusion UNet++ network combines the features of the last layer of UNet++. To alleviate overfitting during training, a spatial dropout layer is attached after every convolutional layer. The overall idea is to extract features of the ultrasound fetal head image with the Fusion UNet++ deep neural network, predict a probability map of the fetal head region, and output the region of interest of the fetal head semantic segmentation. Key points on the fetal head edge are then extracted, the edge is fitted with a curve fitting method, and the fetal head circumference is finally measured.
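The fusion and spatial dropout steps can be pictured with a short sketch. The following PyTorch module is a minimal illustration only, assuming the four same-resolution feature maps feeding UNet++'s supervised outputs are available; the channel counts, layer choices, and the name FusionHead are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Hypothetical fusion head: concatenate the four UNet++ output features,
    extract fused features, and predict the fetal head probability map."""

    def __init__(self, in_channels: int = 32, n_branches: int = 4, p_drop: float = 0.3):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(in_channels * n_branches, in_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            # spatial dropout: zeroes whole feature maps rather than single pixels
            nn.Dropout2d(p=p_drop),
        )
        self.classifier = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, branch_features):
        # branch_features: list of four (N, in_channels, H, W) tensors taken from
        # the nodes that feed UNet++'s deep-supervision outputs
        fused = self.fuse(torch.cat(branch_features, dim=1))
        return torch.sigmoid(self.classifier(fused))
```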
Result
The model is evaluated on HC18, an existing public dataset for automated fetal head circumference measurement in 2D ultrasound, using the Dice coefficient, Hausdorff distance (HD), and absolute difference (AD) of head circumference. The Dice coefficient is 98.06%, the HD is 1.21±0.69 mm, and the AD of head circumference measurement is 1.84±1.73 mm. On the second-trimester test data, the Dice coefficient is 98.24%, the HD is 1.15±0.59 mm, and the AD is 1.76±1.55 mm. Among the results submitted for the HC18 dataset on the biomedical image analysis platform Grand Challenge, Fusion UNet++ ranks 3rd in Dice coefficient, 2nd in HD, and 10th in AD.
Conclusion
Compared with classical ultrasound fetal head circumference measurement methods and existing applications of machine learning, Fusion UNet++ effectively overcomes interference such as fuzzy ultrasound boundaries and missing edges, accurately segments the fetal head region of interest, and extracts edge key point information. Compared with existing neural network frameworks, Fusion UNet++ makes full use of contextual information and local localization; for head circumference measurement in the second trimester, the proposed method clearly outperforms the other methods.
Objective
Ultrasound fetal head circumference measurement is crucial for monitoring fetal growth and estimating gestational age. Computer-aided measurement of fetal head circumference is valuable for sonographers who lack experience in ultrasound examinations; with such assistance, they can detect the fetal head edge more accurately and finish an examination more quickly. Fetal head edge detection is necessary for the automatic measurement of fetal head circumference. The boundary of the fetal head in ultrasound images is fuzzy, and the gray scale of the fetal head is similar to that of the mother's abdominal tissue, especially in the first trimester. Acoustic shadowing leads to loss of the head edge and an incomplete fetal head in the image, which makes detecting the complete fetal head edge and fitting the head ellipse difficult. The structures of the amniotic fluid and uterine wall resemble the head in texture and gray scale, often leading to their misclassification as fetal head. All these factors make ultrasound fetal head edge detection challenging. Therefore, we propose a method for detecting the ultrasound fetal head edge that uses a convolutional neural network to segment the fetal head region end to end.
Method
The model proposed in this paper is based on UNet++. In deeply supervised UNet++, each output differs and provides its own prediction of the region of interest, but only the best prediction is used to delineate the fetal head region. In general, the accuracy of the outputs increases from left to right. Four feature blocks precede the four outputs of UNet++: the leftmost features carry location information, whereas the rightmost carry semantic information. To make full use of the feature maps before the outputs, we fuse them by concatenation and further extract features from the fused maps; the improved model is named Fusion UNet++. To prevent overfitting, we introduce spatial dropout after each convolutional layer instead of standard dropout, which extends dropout across an entire feature map. Fetal head circumference is then measured as follows. First, Fusion UNet++ learns the features of the 2D ultrasound fetal head image and produces the semantic segmentation of the fetal head from the fetal head probability map. Second, on the basis of this segmentation result, the fetal head edge is extracted with an edge detection algorithm and the head contour is fitted with the direct least squares ellipse fitting method. Finally, the fetal head circumference is calculated with the ellipse circumference formula.
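The post-processing stage can be sketched as follows. The code below is one possible implementation under stated assumptions: OpenCV (version 4 or later) is used, cv2.fitEllipseDirect provides a direct least squares ellipse fit in the spirit of Fitzgibbon et al. (1999), and Ramanujan's approximation stands in for the ellipse circumference formula; the paper does not prescribe these particular routines.

```python
import math

import cv2
import numpy as np

def head_circumference_mm(mask: np.ndarray, pixel_size_mm: float) -> float:
    """Estimate fetal head circumference (mm) from a binary segmentation mask."""
    # OpenCV >= 4 returns (contours, hierarchy); keep the largest contour as the head
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    head = max(contours, key=cv2.contourArea)
    # direct least squares ellipse fit; returns center, axis lengths, rotation angle
    (_, _), (d1, d2), _ = cv2.fitEllipseDirect(head)
    a, b = d1 / 2.0, d2 / 2.0  # semi-axes in pixels
    # Ramanujan's approximation of the ellipse perimeter
    perimeter_px = math.pi * (3.0 * (a + b) - math.sqrt((3.0 * a + b) * (a + 3.0 * b)))
    return perimeter_px * pixel_size_mm
```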
Result
HC18, the open dataset for automated measurement of fetal head circumference in 2D ultrasound images hosted on Grand Challenge, contains fetal head images from the first, second, and third trimesters, all acquired in the standard plane for measuring fetal head circumference. The HC18 training set consists of 999 2D ultrasound images annotated with fetal head circumference, and the test set consists of 335 images without annotations. We train the convolutional neural network on the training set and submit the predictions for the test set to the HC18 Grand Challenge for model evaluation. The Dice coefficient, Hausdorff distance (HD), and absolute difference (AD) are used as assessment indexes to evaluate the proposed method quantitatively. With the proposed method, over the fetal head images of all three trimesters, the Dice coefficient of fetal head segmentation is 98.06%, the HD is 1.21±0.69 mm, and the AD of head circumference measurement is 1.84±1.73 mm. The skull appears as a bright, visible structure in the second trimester, whereas it is invisible in the first trimester and visible but incomplete in the third. Because the complete skull is difficult to see in the first and third trimesters, the head circumference measurement in the second trimester is the best among the three; indeed, most algorithms measure fetal head circumference only on second-trimester, or second- and third-trimester, ultrasound images. For the second trimester, the Dice coefficient of fetal head segmentation is 98.24%, the HD is 1.15±0.59 mm, and the AD of head circumference measurement is 1.76±1.55 mm. Among the results submitted on the open test set, our method ranks 3rd in Dice, 2nd in HD, and 10th in AD.
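For clarity, the three assessment indexes can be computed roughly as sketched below, assuming binary masks, edge point sets in pixel coordinates, and a known pixel size in millimetres; this is an illustrative sketch, not the official HC18 evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def hausdorff_mm(pred_pts: np.ndarray, gt_pts: np.ndarray, pixel_size_mm: float) -> float:
    """Symmetric Hausdorff distance between two (N, 2) edge point sets, in mm."""
    d_px = max(directed_hausdorff(pred_pts, gt_pts)[0],
               directed_hausdorff(gt_pts, pred_pts)[0])
    return d_px * pixel_size_mm

def absolute_difference_mm(hc_pred_mm: float, hc_gt_mm: float) -> float:
    """Absolute difference between predicted and reference head circumference."""
    return abs(hc_pred_mm - hc_gt_mm)
```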
Conclusion
In comparison with traditional and machine learning methods, the proposed method effectively overcomes the interference of fuzzy boundaries and missing edges and accurately segments the fetal head region. In comparison with existing neural network methods, it surpasses the other methods in fetal head segmentation and head circumference measurement in the second trimester of pregnancy and achieves state-of-the-art results in fetal head segmentation.
Carneiro G, Georgescu B, Good S and Comaniciu D. 2008. Detection and measurement of fetal anatomies from ultrasound images using a constrained probabilistic boosting tree. IEEE Transactions on Medical Imaging, 27(9):1342-1355[DOI:10.1109/TMI.2008.928917]
Chen K, Li S L and Tang P. 2009. A method for fetal head ellipse detection in ultrasound image. Journal of Image and Graphics, 14(12):2478-2482[DOI:10.11834/jig.20091208]
Duda R O and Hart P E. 1972. Use of the Hough transformation to detect lines and curves in pictures. Communications of the ACM, 15(1):11-15[DOI:10.1145/361237.361242]
Fitzgibbon A, Pilu M and Fisher R B.1999. Direct least square fitting of ellipses. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(5):476-480[DOI:10.1109/34.765658]
Foi A, Maggioni M, Pepe A, Rueda S, Noble J A, Papageorghiou A T and Tohka J. 2014. Difference of Gaussians revolved along elliptical paths for ultrasound fetal head segmentation. Computerized Medical Imaging and Graphics, 38(8):774-784[DOI:10.1016/j.compmedimag.2014.09.006]
Hartigan J A and Wong M A. 1979. Algorithm AS 136: A K-means clustering algorithm. Journal of the Royal Statistical Society. Series C, 28(1):100-108[DOI:10.2307/2346830]
Kim H P, Lee S M, Kwon J Y, Park Y, Kim K C and Seo J K. 2018. Automatic evaluation of fetal head biometry from ultrasound images using machine learning[EB/OL]. [2019-05-04]. https://arxiv.org/pdf/1808.06150.pdf
Li J, Wang Y, Lei B Y, Cheng J Z, Qin J, Wang T F, Li S L and Ni D. 2018. Automatic fetal head circumference measurement in ultrasound using random forest and fast ellipse fitting. IEEE Journal of Biomedical and Health Informatics, 22(1):215-223[DOI:10.1109/JBHI.2017.2703890]
Lu W, Tan J L and Floyd R. 2005. Automated fetal head detection and measurement in ultrasound images by iterative randomized Hough transform. Ultrasound in Medicine and Biology, 31(7):929-936[DOI:10.1016/j.ultrasmedbio.2005.04.002]
Ronneberger O, Fischer P and Brox T. 2015. U-Net: Convolutional networks for biomedical image segmentation//Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich, Germany: Springer: 234-241[DOI:10.1007/978-3-319-24574-4_28]
Shelhamer E, Long J and Darrell T. 2017. Fully convolutional networks for semantic segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(4):640-651[DOI:10.1109/TPAMI.2016.2572683]
Simonyan K and Zisserman A. 2014. Very deep convolutional networks for large-scale image recognition[EB/OL]. [2019-05-04]. https://arxiv.org/pdf/1409.1556.pdf
Sinclair M, Baumgartner C F, Matthew J, Bai W J, Martinez C J, Li Y W, Smith S, Knight C L, Kainz B, Hajnal J, King A P and Rueckert D. 2018. Human-level performance on automatic head biometrics in fetal ultrasound using fully convolutional neural networks[EB/OL]. [2019-05-04]. https://arxiv.org/pdf/1804.09102.pdf
Srivastava N, Hinton G, Krizhevsky A, Sutskever I and Salakhutdinov R. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958
Svetnik V, Liaw A, Tong C, Culberson J C, Sheridan R P and Feuston B P. 2003. Random forest: A classification and regression tool for compound classification and QSAR modeling. Journal of Chemical Information and Computer Sciences, 43(6):1947-1958[DOI:10.1021/ci034160g]
Tompson J, Goroshin R, Jain A, LeCun Y and Bregler C. 2015. Efficient object localization using convolutional networks//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA, USA: IEEE: 648-656[DOI:10.1109/CVPR.2015.7298664]
van den Heuvel T L A, de Bruijn D, de Korte C L and van Ginneken B. 2018. Automated measurement of fetal head circumference using 2D ultrasound images. PLoS One, 13(8):e0200412[DOI:10.1371/journal.pone.0200412]
Wu L Y, Xin Y, Li S L, Wang T F, Heng P A and Ni D. 2017. Cascaded fully convolutional networks for automatic prenatal ultrasound image segmentation//Proceedings of 2017 IEEE 14th International Symposium on Biomedical Imaging. Melbourne, VIC, Australia: IEEE: 663-666[DOI:10.1109/ISBI.2017.7950607]
Zhang L, Ye X J, Lambrou T, Duan W T, Allinson N and Dudley N J. 2016. A supervised texton based approach for automatic segmentation and measurement of the fetal head and femur in 2D ultrasound images. Physics in Medicine and Biology, 61(3):1095-1115[DOI:10.1088/0031-9155/61/3/1095]
Zhang Z L, Zhang X Y, Peng C, Xue X Y and Sun J. 2018. ExFuse: Enhancing feature fusion for semantic segmentation//Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer: 273-288[DOI:10.1007/978-3-030-01249-6_17]
Zhou Z W, Siddiquee M M R, Tajbakhsh N and Liang J M. 2018. UNet++: A nested U-Net architecture for medical image segmentation//Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Cham: Springer: 3-11[DOI:10.1007/978-3-030-00889-5_1]