Prostate MRI segmentation by using conditional generative adversarial networks with multi-scale discriminators
Journal of Image and Graphics, 2019, 24(9): 1581-1587
Received: 2018-12-04; Revised: 2019-04-03; Published in print: 2019-09-16
DOI: 10.11834/jig.180664

Objective
Images obtained by MRI (magnetic resonance imaging) offer high resolution and good soft-tissue contrast, enabling doctors to obtain the required information more precisely. Accurate prostate MRI segmentation is a necessary pre-processing stage for computer-aided detection and diagnosis algorithms. Clinical practice therefore calls for an automatic or semi-automatic prostate segmentation algorithm that provides robust, high-quality results for a wide variety of clinical applications. This paper proposes a conditional generative adversarial network with multi-scale discriminators to segment prostate MRI automatically and meet the needs of clinical practice.
Method
The proposed segmentation method is based on a conditional generative adversarial network and consists of two parts, a generator and a discriminator. The generator is a U-Net-like convolutional neural network that produces a mask of the prostate region from the input MRI. The discriminator is a multi-scale discriminator: two discriminators with the same network structure but different input image sizes. To stabilize training, the method uses a feature matching loss. During training, an adversarial mechanism iteratively optimizes the generator and the discriminator until both converge; the trained generator then performs prostate MRI segmentation.
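The stride-2 downsampling and mirrored upsampling of a U-Net-like generator can be sketched with simple spatial-size bookkeeping. This is an illustrative sketch, not the authors' code; the depth of 4 and the 256-pixel input are assumptions for illustration only.

```python
# Illustrative sketch: feature-map side lengths through a U-Net-like
# encoder-decoder whose encoder halves the spatial size with stride-2
# convolutions and whose decoder mirrors it with transposed convolutions.
def unet_spatial_sizes(input_size: int, depth: int) -> list[int]:
    """Return the feature-map side length at each encoder level."""
    sizes = [input_size]
    for _ in range(depth):
        sizes.append(sizes[-1] // 2)  # each stride-2 conv halves H and W
    return sizes

enc = unet_spatial_sizes(256, 4)  # e.g. 256 -> 128 -> 64 -> 32 -> 16
dec = list(reversed(enc))         # transposed convs mirror the encoder
assert dec[-1] == 256             # output mask matches the input resolution
```

The mirrored decoder is what lets the generator emit a mask at the same resolution as the input MRI slice.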
Result
The experimental data come from the PROMISE12 prostate segmentation challenge and the First Affiliated Hospital of Anhui Medical University. With the Dice similarity coefficient and the Hausdorff distance as evaluation metrics, the proposed algorithm achieves a Dice similarity coefficient of 88.9% and a Hausdorff distance of 5.3 mm. Compared with U-Net, DSCNN (deeply supervised convolutional neural network), and other methods, the proposed algorithm segments more accurately and is more robust. In the test phase, each image is segmented in less than 1 s, faster than a specialist doctor.
Conclusion
A conditional generative adversarial network with multi-scale discriminators is proposed to segment the prostate. Quantitative and qualitative analyses demonstrate the effectiveness of the algorithm: it segments the prostate accurately and meets real-time requirements, satisfying the needs of clinical diagnosis and treatment.
Objective
Information on the size, shape, and location of the prostate relative to adjacent organs is important in surgical planning for prostatectomy, radiation therapy, and emerging minimally invasive therapies. Images obtained by MRI (magnetic resonance imaging) have the advantages of high resolution and good soft-tissue contrast, thereby enabling doctors to obtain the required information accurately. Accurate prostate MRI segmentation is an essential pre-processing task for computer-aided detection and diagnosis algorithms. Segmentation of the prostate in MR images is challenging because the prostate exhibits a wide variety of morphological changes, shows low contrast against adjacent structures such as blood vessels, the bladder, the urethra, the rectum, and the seminal vesicles, and has inherently complex intensity variations. Moreover, manual segmentation of MR images is time consuming, subject to limited reproducibility, heavily dependent on experience, and prone to large inter- and intra-observer variation. Consequently, an automatic or semi-automatic prostate segmentation algorithm that provides robust, high-quality results for a wide variety of clinical applications is required. Therefore, a conditional generative adversarial network with multi-scale discriminators is proposed to segment prostate MRI automatically and satisfy the requirements of clinical practice.
Method
The proposed segmentation method is based on a conditional generative adversarial network, which consists of a generator and a discriminator. The generator takes an MR image and noise as input, downsamples it through a series of stride-2 convolutions, and then upsamples it back to the input size through a series of stride-1/2 (transposed) convolutions. The generator, a convolutional neural network similar to U-Net, models a mapping from the MRI to the prostate region. We propose a multi-scale discriminator: two discriminators with the same structure but different input sizes. The discriminator with the smaller input size has the larger receptive field; it has a global view of the image and guides the generator to produce a globally continuous prostate region. The discriminator with the larger input size guides the generator to produce fine details, such as the prostate boundary. The structure of the discriminators inherits the PatchGAN of pix2pix, which maps an input image to an N × N array of outputs, where each element indicates whether the corresponding patch of the image is real or fake. In addition, to stabilize training, the proposed method uses a feature matching loss: feature maps of the real image and the generated image are extracted from the convolutional network to define a loss function. The network is trained by minimizing this feature loss so that it learns the difference between the generated image and the real image, making the two more similar in feature space. An adversarial training mechanism is used in the training process to iteratively optimize the generator and the discriminator until they converge simultaneously. After training, the generator can be used as a prostate segmentation network.
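The feature matching loss across the two discriminator scales can be sketched as follows. This is a hedged, framework-agnostic sketch, not the authors' implementation: the helper `extract_features` is a hypothetical stand-in for the discriminator's intermediate layers, and simple subsampling stands in for the usual downsampling of the second discriminator's input.

```python
import numpy as np

def feature_matching_loss(real_feats, fake_feats):
    """Sum of mean absolute differences between corresponding feature maps
    of the real mask and the generated mask."""
    return sum(np.mean(np.abs(r - f)) for r, f in zip(real_feats, fake_feats))

def multiscale_fm_loss(real_img, fake_img, extract_features, scales=(1, 2)):
    """Accumulate the feature matching loss over both discriminator scales;
    the second discriminator sees a 2x-subsampled input."""
    total = 0.0
    for s in scales:
        total += feature_matching_loss(extract_features(real_img[::s, ::s]),
                                       extract_features(fake_img[::s, ::s]))
    return total
```

Because the loss compares features rather than raw pixels, the generator receives a smoother training signal than the adversarial term alone provides, which is why it stabilizes training.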
Result
The experimental data are obtained from the PROMISE12 prostate segmentation challenge and the First Affiliated Hospital of Anhui Medical University; some of the images are used for training and the rest for testing. The Dice similarity coefficient and the Hausdorff distance are used as evaluation metrics. The Dice similarity coefficient is 88.9% and the Hausdorff distance is 5.3 mm. Our results show that the proposed algorithm is more accurate and robust than U-Net, DSCNN (deeply supervised convolutional neural network), and other methods. We also compare segmentation time: during the test phase, each image is segmented in less than one second, faster than a specialist doctor.
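The two reported metrics can be computed for binary masks as follows. This is an illustrative NumPy implementation, not the PROMISE12 challenge's official evaluation code; distances are in pixels and must be scaled by the voxel spacing to obtain millimetres.

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap of two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the foreground pixels of
    two binary masks (brute-force pairwise distances)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Dice measures volumetric overlap, while the Hausdorff distance captures the worst-case boundary error, so the two metrics are complementary.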
Conclusion
A conditional generative adversarial network with multi-scale discriminators is proposed to segment prostate MRI. Qualitative and quantitative experiments show the effectiveness of the proposed algorithm, which effectively improves the robustness of prostate segmentation. More importantly, it satisfies real-time segmentation requirements and can provide a basis for clinical diagnosis and treatment. The proposed model is therefore well suited to the clinical segmentation of prostate MRI.
He K M, Zhang X Y, Ren S Q, et al. Deep residual learning for image recognition[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016: 770-778. [DOI: 10.1109/CVPR.2016.90]
Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[C]//Proceedings of the 25th International Conference on Neural Information Processing Systems. Lake Tahoe, USA: ACM, 2012: 1097-1105.
Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[EB/OL]. [2018-12-01]. https://arXiv.org/pdf/1409.1556.pdf.
Szegedy C, Liu W, Jia Y Q, et al. Going deeper with convolutions[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE, 2015: 1-9. [DOI: 10.1109/CVPR.2015.7298594]
Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE, 2015: 3431-3440. [DOI: 10.1109/CVPR.2015.7298965]
Zhan S, Liang Z C, Xie D D. Deconvolutional neural network for prostate MRI segmentation[J]. Journal of Image and Graphics, 2017, 22(4): 516-522. [DOI: 10.11834/jig.20170411]
Shi Y G, Cheng K, Liu Z W. Segmentation of hippocampal subfields by using deep learning and support vector machine[J]. Journal of Image and Graphics, 2018, 23(4): 542-551. [DOI: 10.11834/jig.170431]
Liu Y P, Cai W L, Hong G B, et al. Automatic segmentation of shoulder joint in MRI by using patch-wise and full-image fully convolutional networks[J]. Journal of Image and Graphics, 2018, 23(10): 1558-1570. [DOI: 10.11834/jig.180044]
Chen L C, Papandreou G, Kokkinos I, et al. DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(4): 834-848. [DOI: 10.1109/TPAMI.2017.2699184]
Ren S Q, He K M, Girshick R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149. [DOI: 10.1109/TPAMI.2016.2577031]
Drozdzal M, Chartrand G, Vorontsov E, et al. Learning normalized inputs for iterative estimation in medical image segmentation[J]. Medical Image Analysis, 2018, 44: 1-13. [DOI: 10.1016/j.media.2017.11.005]
Milletari F, Navab N, Ahmadi S A. V-Net: fully convolutional neural networks for volumetric medical image segmentation[C]//Proceedings of the Fourth International Conference on 3D Vision. Stanford, USA: IEEE, 2016: 565-571. [DOI: 10.1109/3DV.2016.79]
Zhu Q K, Du B, Turkbey B, et al. Deeply-supervised CNN for prostate segmentation[C]//Proceedings of 2017 International Joint Conference on Neural Networks. Anchorage, USA: IEEE, 2017: 178-184. [DOI: 10.1109/IJCNN.2017.7965852]
Goodfellow I J, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems. Montreal, Canada: MIT Press, 2014: 2672-2680.
Isola P, Zhu J Y, Zhou T H, et al. Image-to-image translation with conditional adversarial networks[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017: 5967-5976. [DOI: 10.1109/CVPR.2017.632]
Mao X D, Li Q, Xie H R, et al. Least squares generative adversarial networks[C]//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017: 2813-2821. [DOI: 10.1109/ICCV.2017.304]
Salimans T, Goodfellow I, Zaremba W, et al. Improved techniques for training GANs[C]//Proceedings of the 30th Annual Conference on Neural Information Processing Systems. Barcelona, Spain: Curran Associates, Inc., 2016.
Tian Z Q, Liu L Z, Zhang Z F, et al. PSNet: prostate segmentation on MRI based on a convolutional neural network[J]. Journal of Medical Imaging, 2018, 5(2): 021208. [DOI: 10.1117/1.JMI.5.2.021208]
Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation[C]//Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich, Germany: Springer, 2015: 234-241. [DOI: 10.1007/978-3-319-24574-4_28]
Liao S, Gao Y Z, Oto A, et al. Representation learning: a unified deep learning framework for automatic prostate MR segmentation[C]//Proceedings of the 16th International Conference on Medical Image Computing and Computer-Assisted Intervention. Nagoya, Japan: Springer, 2013: 254-261. [DOI: 10.1007/978-3-642-40763-5_32]