Left ventricular segmentation on ultrasound images using deep layer aggregation for residual dense networks
2020, Vol. 25, No. 9, Pages 1930-1942
Received: 2019-11-29
Revised: 2020-03-06
Accepted: 2020-03-13
Published in print: 2020-09-16
DOI: 10.11834/jig.190552
Objective
Ultrasound imaging is one of the most widely used medical imaging modalities in clinical medicine, but left ventricular ultrasound images generally suffer from strong noise, weak edges, and complex tissue structure, which makes them difficult to segment. Clinical practice therefore calls for an efficient, high-quality algorithm for left ventricular segmentation of ultrasound images. This paper proposes a left ventricular segmentation algorithm for ultrasound images based on deep layer aggregation for residual dense networks (DLA-RDNet).
Method
Morphological operations are first applied to the acquired ultrasound images to locate the target region and obtain the target image. A residual dense network (RDNet) is then constructed to extract image features, and the hierarchical information produced by RDNet is tightly fused through deep layer aggregation (DLA), yielding the segmentation network DLA-RDNet, which performs accurate segmentation of the left ventricle in ultrasound images. Finally, deep supervision (DS) is used to prune the network, simplifying its structure and increasing its running speed.
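As a rough, illustrative sketch of this kind of preprocessing (the exact morphological operations, thresholds, and kernel sizes used in the paper are not specified here and are assumptions), a target region can be localized and cropped with OpenCV as follows:

```python
# Illustrative sketch only: thresholding choice, kernel sizes, and the cropping
# margin below are assumptions, not the paper's actual preprocessing pipeline.
import cv2
import numpy as np

def locate_roi(us_image: np.ndarray, margin: int = 20) -> np.ndarray:
    """Roughly localize the target region on a grayscale ultrasound frame
    using morphological operations and return the cropped target image."""
    # Suppress speckle noise before thresholding (kernel size is a guess).
    blurred = cv2.medianBlur(us_image, 5)
    # Binarize; Otsu picks a global threshold automatically.
    _, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Morphological closing then opening to fill holes and remove small specks.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Keep the largest connected region and crop its bounding box.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return us_image  # fall back to the full frame
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    x0, y0 = max(x - margin, 0), max(y - margin, 0)
    return us_image[y0:y + h + margin, x0:x + w + margin]
```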
Result
Experimental results on the test set show that the proposed algorithm achieves an average accuracy of 95.68%, an average intersection over union of 97.13%, an average Dice similarity coefficient of 97.15%, an average vertical distance of 0.31 mm, and a contour qualification rate of 99.32%. Compared with six other segmentation algorithms, the proposed algorithm attains higher segmentation accuracy. In the test stage, each image is segmented in less than 1 s, far faster than manual segmentation by expert physicians.
Conclusion
A deep layer aggregation residual dense network is proposed for left ventricular segmentation of ultrasound images. Subjective and objective comparison experiments demonstrate the effectiveness of the algorithm: it segments the left ventricle in ultrasound images more accurately and closer to real time than the comparison methods, meeting the clinical requirements for left ventricular segmentation of ultrasound images.
Objective
Ultrasound images are widely used in clinical medicine. Compared with other medical imaging technologies, ultrasound (US) imaging is noninvasive, involves no ionizing radiation, and is relatively cheap and simple to operate. To assess whether a heart is healthy, the ejection fraction is measured and regional wall motion is assessed on the basis of the endocardial border of the left ventricle. Generally, cardiologists analyze and segment ultrasound images manually or semiautomatically to identify the endocardial border of the left ventricle. However, these segmentation methods have several disadvantages. On the one hand, they are cumbersome and time-consuming, and the images can only be segmented by professional clinicians. On the other hand, the images must be re-segmented for different heart disease patients. These problems can be addressed by automatic segmentation systems. Unfortunately, owing to the characteristics of ultrasound imaging devices and the complex structure of the heart, left ventricular segmentation faces the following challenges. First, false edges lead to incorrect segmentation results because the gray scale of the trabeculae and papillary muscles is similar to that of the myocardium. Second, the shape of the left ventricle in a given slice is irregular under the influence of the atrium. Third, the accurate position of the left ventricle is difficult to obtain from ultrasound images because the gray value of its edges is almost the same as that of the myocardium and the tissues surrounding the left heart (such as fat and the lungs). Fourth, ultrasound imaging devices produce substantial noise, which degrades image quality; the resulting low resolution of ultrasound images is not conducive to ventricular structure segmentation. In recent years, algorithms for left ventricular segmentation have improved considerably, but some problems remain. Compared with traditional segmentation methods, deep learning-based methods are more advanced, yet some useful original information is lost when images are downsampled. In addition, these methods can hardly recognize the weak edges in ultrasound images, resulting in large errors in edge segmentation. Moreover, their segmentation accuracy is reduced by the substantial noise in ultrasound images. Considering the abovementioned challenges and problems, this study proposes deep layer aggregation for residual dense networks (DLA-RDNet) to identify the left ventricular endocardial border on two-dimensional ultrasound images.
Method
The proposed method includes three parts: image preprocessing, the neural network structure, and network optimization. First, the dataset must match the neural network after the ultrasound images are preprocessed. This part includes two steps. In the first step, we locate the ventricle on the ultrasound images in advance on the basis of prior information to avoid interference from other tissues and organs. The second step is the expansion of the dataset to prevent overfitting during network training. Second, a new segmentation network is proposed. On the one hand, we adopt a network connection method called deep layer aggregation (DLA) to integrate the shallow and deep feature information of the images more closely, so that less detailed information is lost in the downsampling and upsampling processes.
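To make the idea of layer aggregation concrete, the sketch below shows a generic DLA-style aggregation node in PyTorch that fuses a shallow, high-resolution feature map with a deeper, coarser one; the channel counts, bilinear upsampling, and 3×3 fusion convolution are assumptions, and the actual aggregation topology of DLA-RDNet is not reproduced here.

```python
# Generic DLA-style aggregation node: an illustration, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AggregationNode(nn.Module):
    """Fuse two feature maps from different depths into one map."""
    def __init__(self, in_channels: int, out_channels: int):
        # in_channels should equal channels(shallow) + channels(deep) after concatenation.
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # Upsample the deeper (coarser) map to the shallow map's resolution,
        # concatenate along channels, then fuse with a 3x3 convolution.
        deep = F.interpolate(deep, size=shallow.shape[2:], mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([shallow, deep], dim=1))
```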
On the other hand, we redesign the downsampling network (RDNet). Combining the advantages of ResNet and DenseNet, we propose a residual dense network, which allows the downsampling process to retain additional useful information.
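As an illustration of what such a building block might look like, the following sketch combines DenseNet-style dense connections inside the block with a ResNet-style local residual shortcut, in the spirit of residual dense blocks; the layer count, growth rate, and channel sizes are illustrative assumptions rather than the paper's configuration.

```python
# Sketch of a residual dense block: dense intra-block connections (DenseNet)
# plus a local residual shortcut (ResNet). Hyperparameters are assumptions.
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    def __init__(self, channels: int = 64, growth: int = 32, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
        # 1x1 local feature fusion back to the input channel count.
        self.fusion = nn.Conv2d(channels + num_layers * growth, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            # Dense connectivity: each layer sees all preceding feature maps.
            features.append(layer(torch.cat(features, dim=1)))
        # Local residual learning: fuse and add the block input back.
        return x + self.fusion(torch.cat(features, dim=1))
```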
Third, we optimize the neural network. For the redundant part of the network, we use the deep supervision (DS) method for pruning. Consequently, we simplify the network structure and improve its running speed.
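One common way to realize deep supervision, sketched below under the assumption of one auxiliary 1×1 prediction head per decoder level, is to let every level contribute to the training loss; at inference an earlier head can be kept and the deeper, redundant layers pruned. How DLA-RDNet attaches its side outputs is not reproduced here.

```python
# Hedged sketch of deep supervision: the number of heads and where they attach
# are assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def deeply_supervised_loss(decoder_feats, heads, target, criterion):
    """decoder_feats: list of feature maps at increasing depth;
    heads: matching list of 1x1 Conv2d prediction heads,
           e.g. nn.ModuleList(nn.Conv2d(c, 1, kernel_size=1) for c in channels);
    target: ground-truth mask of shape (N, 1, H, W)."""
    total = 0.0
    for feat, head in zip(decoder_feats, heads):
        logits = head(feat)
        # Resize each side output to the ground-truth resolution before the loss.
        logits = F.interpolate(logits, size=target.shape[2:], mode="bilinear", align_corners=False)
        total = total + criterion(logits, target)
    return total / len(heads)
```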
Furthermore, the network loss function is defined as a combination of binary cross entropy and the Dice loss, and a sigmoid function is used to achieve pixel-level classification. Finally, the design of the segmentation network is completed.
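A minimal sketch of such a combined loss is given below, assuming an equally weighted sum of binary cross entropy and a soft Dice term with sigmoid activation; the paper's exact weighting and smoothing constant are not specified here.

```python
# Combined BCE + soft Dice loss for binary segmentation (weights are assumptions).
import torch
import torch.nn as nn

class BCEDiceLoss(nn.Module):
    def __init__(self, smooth: float = 1.0):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()  # applies the sigmoid internally
        self.smooth = smooth

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        bce = self.bce(logits, target)
        prob = torch.sigmoid(logits)
        # Soft Dice computed over the whole batch.
        intersection = (prob * target).sum()
        dice = (2.0 * intersection + self.smooth) / (prob.sum() + target.sum() + self.smooth)
        return bce + (1.0 - dice)
```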
Result
Experimental results on the test dataset show that the average accuracy of the algorithm is 95.68%, the average intersection over union is 97.13%, the average Dice coefficient is 97.15%, the average vertical distance is 0.31 mm, and the contour qualification rate is 99.32%. Compared with six other segmentation algorithms, the proposed algorithm achieves higher segmentation precision in recognizing the left ventricle in ultrasound images.
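For reference, the Dice coefficient and the intersection over union reported above are commonly defined for a predicted mask P and a ground-truth mask G as follows (standard definitions; the paper's exact evaluation protocol may differ):

```latex
\mathrm{Dice}(P,G)=\frac{2\,|P\cap G|}{|P|+|G|},\qquad
\mathrm{IoU}(P,G)=\frac{|P\cap G|}{|P\cup G|}
```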
Conclusion
A deep layer aggregation for residual dense networks (DLA-RDNet) is proposed to segment the left ventricle in ultrasound images. Subjective and objective evaluations verify the effectiveness of the proposed algorithm. The algorithm can accurately segment the left ventricle in ultrasound images in real time, and the segmentation results meet the strict requirements of left ventricular segmentation in clinical medicine.