
Published: 2019-09-16
DOI: 10.11834/jig.180664
2019 | Volume 24 | Number 9




Medical Image Processing














Prostate MRI segmentation by using conditional generative adversarial networks with multi-scale discriminators
He Jun1, Wu Congzhong1, Ding Zhenglong2, Xu Liangfeng1, Zhan Shu1
1. School of Computer and Information, Hefei University of Technology, Hefei 230009, China;
2. Anhui Institute of Information Technology, Wuhu 241000, China
Supported by: National Natural Science Foundation of China (61371156)

Abstract

Objective Information on the size, shape, and location of the prostate relative to adjacent organs is important in surgical planning for prostatectomy, radiation therapy, and emerging minimally invasive therapies. Images obtained by MRI (magnetic resonance imaging) have the advantages of high resolution and good soft-tissue contrast, enabling doctors to obtain the required information accurately. Accurate prostate MRI segmentation is an essential pre-processing task for computer-aided detection and diagnosis algorithms. Segmenting the prostate in MR images is challenging because the prostate exhibits a wide variety of morphological changes, has low contrast with adjacent structures such as blood vessels, the bladder, the urethra, the rectum, and the seminal vesicles, and shows inherently complex intensity variations. Moreover, manual segmentation from MR images is time consuming, subject to limited reproducibility, heavily dependent on experience, and prone to large inter- and intra-observer variation. Consequently, an automated or semi-automated prostate segmentation algorithm that provides robust, high-quality results for a wide variety of clinical applications is required. We therefore propose a conditional generative adversarial network with multi-scale discriminators to segment prostate MRI automatically and satisfy the requirements of clinical practice. Method The proposed segmentation method is based on a conditional generative adversarial network, which consists of a generator and a discriminator. The generator takes the MR image and noise as input, downsamples through a series of stride-2 convolutions, and then upsamples back to the input size through a series of stride-2 transposed (fractionally strided) convolutions. The purpose of the generator, a convolutional neural network similar to U-Net, is to model a mapping from the MR image to the prostate region. We propose multi-scale discriminators with the same structure but different input sizes. The discriminator with the smaller input size has the larger effective receptive field; it has a global view of the image and can guide the generator to produce a globally continuous prostate region. The discriminator with the larger input size guides the generator to produce fine details, such as the prostate boundary. The structure of the discriminators inherits the PatchGAN of pix2pix, which maps an input to an N×N array of outputs, where each element indicates whether the corresponding patch in the image is real or fake. In addition, to stabilize training, the proposed method uses a feature matching loss, which extracts feature maps of the real image and the generated image from the convolutional network to define the loss function. The network is trained by minimizing this feature loss, learning the difference between the generated and real images so that they become more similar in feature space. An adversarial training mechanism iteratively optimizes the generator and the discriminators until they converge simultaneously. After training, the generator serves as the prostate segmentation network. Result The experimental data come from the PROMISE12 prostate segmentation challenge and the First Affiliated Hospital of Anhui Medical University; part of the images are used for training and the rest for testing.
The Dice similarity coefficient and the Hausdorff distance are used as evaluation metrics. The proposed algorithm achieves a Dice similarity coefficient of 88.9% and a Hausdorff distance of 5.3 mm. Our results show that the proposed algorithm is more accurate and robust than U-Net, DSCNN (deeply supervised convolutional neural network), and other methods. We also compare segmentation time: during the test phase, each image is segmented in less than one second, which is faster than a specialist doctor. Conclusion A conditional generative adversarial network with multi-scale discriminators is proposed to segment prostate MRI. Qualitative and quantitative experiments show the effectiveness of the proposed algorithm. The method effectively improves the robustness of prostate segmentation and, more importantly, satisfies real-time segmentation requirements, providing a basis for clinical diagnosis and treatment. The proposed model is therefore highly appropriate for the clinical segmentation of prostate MRI.

Key words

magnetic resonance imaging (MRI); prostate segmentation; generative adversarial networks (GANs); generator; discriminator

0 Introduction

In recent years, deep learning has shown superior performance in tasks such as classification [1-4], segmentation [5-9], and detection [10], and some researchers have applied fully convolutional networks (FCNs) [5] to prostate segmentation, achieving better results than traditional methods. Drozdzal et al. [11] used an FCN as a pre-processor to normalize raw input images, fed the normalized results into an FC-ResNet, and obtained a segmentation network through iterative optimization. Milletari et al. [12] used 3D convolutions directly and trained an FCN end to end with a new objective function based on the Dice overlap coefficient, designed specifically for medical image segmentation; this resolved the imbalance between background and foreground pixels and produced good segmentation results. Zhu et al. [13] improved the U-Net framework by adding eight extra deeply supervised layers, all of which supervise the training process so that gradients back-propagate better; their experiments showed good performance. All of these methods are FCN-based: they construct a per-pixel loss function and train the network to minimize it. Such loss functions treat each pixel as independent of all others, although neighboring pixels are in fact correlated, so relationships between pixels and useful structural information are lost, and prior knowledge such as the shape of the prostate cannot be learned.

Meanwhile, generative adversarial networks (GANs) [14-17] have emerged as a flexible and effective framework for tasks such as image generation, super-resolution reconstruction, and object detection. A GAN consists of a generator and a discriminator: the generator tries to produce samples that approximate real images, while the discriminator tries to distinguish real samples from generated ones. We propose a conditional GAN (here the condition is the prostate MR image) with multi-scale discriminators to segment the prostate automatically.

The pix2pix network [15] is a conditional GAN for image-to-image translation. Building on this architecture, we replace its discriminator with multi-scale discriminators and its ${L_1}$ loss with the feature matching loss proposed by Salimans et al. [17]. Through the adversarial loss, the conditional GAN learns the overall shape of the prostate, which is equivalent to learning a structural loss that makes the network "aware" of the overall prostate shape. The framework of our method is shown in Fig. 1.

Fig. 1 The structure of the conditional generative adversarial network with multi-scale discriminators for prostate segmentation

1 Proposed algorithm

1.1 Conditional generative adversarial networks

A conditional GAN is a variant of the GAN in which both the generator $G$ and the discriminator $D$ are conditioned on some extra information $\mathit{\boldsymbol{y}}$, which can be any kind of auxiliary information, such as a textual description of an image or another modality of the same image. For the generator, the condition $\mathit{\boldsymbol{y}}$ and the prior input noise $\mathit{\boldsymbol{z}} \sim {p_z}(\mathit{\boldsymbol{z}})$ are input together as a joint hidden representation; for the discriminator, the condition $\mathit{\boldsymbol{y}}$ is input together with a real sample $\mathit{\boldsymbol{x}}$ or with $G(\mathit{\boldsymbol{z}})$. The objective function $F$ of the conditional GAN is a conditional two-player minimax game, namely

$ \mathop{\min}\limits_G \mathop{\max}\limits_D F(D, G) = {\rm E}_{\mathit{\boldsymbol{x}} \sim {p_{\rm data}}(\mathit{\boldsymbol{x}})}[\log D(\mathit{\boldsymbol{x}}|\mathit{\boldsymbol{y}})] + {\rm E}_{\mathit{\boldsymbol{z}} \sim {p_z}(\mathit{\boldsymbol{z}})}[\log(1 - D(G(\mathit{\boldsymbol{z}}|\mathit{\boldsymbol{y}})))] $ (1)

where $\mathit{\boldsymbol{x}} \sim {p_{\rm data}}(\mathit{\boldsymbol{x}})$ indicates that $\mathit{\boldsymbol{x}}$ follows the real data distribution, $\mathit{\boldsymbol{z}} \sim {p_z}(\mathit{\boldsymbol{z}})$ indicates that $\mathit{\boldsymbol{z}}$ follows a Gaussian noise distribution, and ${\rm E}$ denotes expectation.
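As a concrete reading of Eq. (1), the following PyTorch sketch shows how the two sides of the minimax game are typically optimized in alternation. The names `G`, `D`, `mri`, and `mask` are assumed placeholders, not the authors' released code; as in pix2pix, the noise is taken to enter the generator implicitly (e.g., through dropout) rather than as an explicit input.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, G, mri, mask):
    """Eq. (1), discriminator side: maximize log D(x|y) + log(1 - D(G(z|y))),
    with the conditioning MRI y concatenated along the channel axis."""
    fake = G(mri).detach()                      # G(z|y); noise enters G implicitly
    d_real = D(torch.cat([mri, mask], dim=1))   # D(x|y)
    d_fake = D(torch.cat([mri, fake], dim=1))   # D(G(z|y)|y)
    return F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
           F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))

def generator_loss(D, G, mri):
    """Eq. (1), generator side: fool the discriminator on the generated mask."""
    d_fake = D(torch.cat([mri, G(mri)], dim=1))
    return F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
```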

1.2 Multi-scale discriminators and feature matching loss

To better capture pixel-level variations in prostate MR images and generate more accurate prostate regions, we use multi-scale discriminators, which let the generator and discriminators learn relationships between pixels over both short and long spatial distances, i.e., both local and global image features. Concretely, the prostate MR image together with the generated or real mask is downsampled by a factor of two, and the original-size and downsampled images are fed to two discriminators ${D_1}, {D_2}$ with identical architectures. The discriminator with the smaller input has a larger receptive field and guides the generator to produce a globally continuous prostate region; the discriminator with the larger input guides the generator to produce finer detail, such as the boundary between the prostate and other tissues. The objective of the multi-scale discriminators is

$ \mathop {{\rm{min}}}\limits_G \mathop {{\rm{max}}}\limits_{{D_1}, {D_2}} [F({D_1}, G) + F({D_2}, G)] $
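A minimal sketch of how the two-scale pairing might be wired, assuming `d1` and `d2` are two discriminators of identical architecture (hypothetical names; the downsampling filter is an assumption):

```python
import torch
import torch.nn.functional as F

def multiscale_outputs(d1, d2, mri, mask):
    """Feed the (MRI, mask) pair to D1 at full resolution and to D2 at a
    2x-downsampled resolution, as described above."""
    pair = torch.cat([mri, mask], dim=1)                                # condition + mask
    pair_half = F.avg_pool2d(pair, kernel_size=3, stride=2, padding=1)  # 2x smaller
    return d1(pair), d2(pair_half)
```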

To improve training stability, Salimans et al. [17] proposed finding a new objective function for the generator. The new objective uses the outputs of the discriminator's intermediate layers so that the features of generated images match those of real images. Intuitively, the discriminator's intermediate layers act as a feature extractor that distinguishes the features of real and generated images, and this feature difference is worth learning by the generator. The feature matching loss used in this paper is the ${\rm L}_1$ norm of the features, namely

$ {L_{\rm F}}({D_k}, G) = \sum\limits_{i = 1}^{N} \frac{1}{M_i} \left\| {\rm E}_{\mathit{\boldsymbol{x}} \sim {p_{\rm data}}(\mathit{\boldsymbol{x}})}[f_k^i(\mathit{\boldsymbol{x}})] - {\rm E}_{\mathit{\boldsymbol{z}} \sim {p_z}(\mathit{\boldsymbol{z}})}[f_k^i(G(\mathit{\boldsymbol{z}}))] \right\|_1 $ (2)

where ${f_k^i}$ denotes the $i$-th layer features of discriminator ${D_k}$, $N$ is the number of feature layers used, and ${M_i}$ is the number of elements in the $i$-th feature layer. In summary, the total objective is the sum of the adversarial objective and the feature matching loss, namely

$ \mathop{\min}\limits_G \left( \left( \mathop{\max}\limits_{{D_1}, {D_2}} \sum\limits_{k = 1, 2} F({D_k}, G) \right) + \beta \sum\limits_{k = 1, 2} {L_{\rm F}}({D_k}, G) \right) $

where $β$ is a hyperparameter; we set $β = 10$ in our experiments.
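A sketch of Eq. (2) and the combined objective, assuming each discriminator exposes its intermediate feature maps as a list with one tensor per layer (an assumption about the interface, not the authors' code):

```python
import torch.nn.functional as F

def feature_matching_loss(feats_real, feats_fake):
    """L_F of Eq. (2): mean L1 distance between real and generated features,
    summed over the N layers; F.l1_loss averages over elements, folding in
    the 1/M_i term. Real features are detached so only G receives gradients."""
    return sum(F.l1_loss(ff, fr.detach()) for fr, ff in zip(feats_real, feats_fake))

def total_generator_loss(adv_losses, fm_losses, beta=10.0):
    """Adversarial terms over D1 and D2 plus beta times the feature matching
    terms, with beta = 10 as in the experiments."""
    return sum(adv_losses) + beta * sum(fm_losses)
```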

1.3 Generator and discriminator

The generator takes a prostate MR image to be segmented as input and outputs a mask of the prostate region. Its structure is an encoder-decoder: the encoder encodes the prostate MR image through a series of convolutions, and the decoder restores the input image size through a series of transposed convolutions. The generator structure is shown in Fig. 2.

Fig. 2 Generator
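A compact PyTorch sketch of such an encoder-decoder generator. The channel widths, normalization layers, and depth are assumptions, and the U-Net-style skip connections of Fig. 2 are omitted for brevity:

```python
import torch.nn as nn

def down(cin, cout):
    # stride-2 convolution: halves the spatial resolution (encoder step)
    return nn.Sequential(nn.Conv2d(cin, cout, 4, stride=2, padding=1),
                         nn.InstanceNorm2d(cout), nn.PReLU())

def up(cin, cout):
    # stride-2 transposed convolution: doubles the spatial resolution (decoder step)
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1),
                         nn.InstanceNorm2d(cout), nn.PReLU())

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(down(1, 64), down(64, 128), down(128, 256))
        self.decoder = nn.Sequential(up(256, 128), up(128, 64),
                                     nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
                                     nn.Sigmoid())  # mask probabilities in [0, 1]

    def forward(self, mri):
        return self.decoder(self.encoder(mri))
```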

The discriminator takes two images as input, the MR image to be segmented and either the manually segmented mask or the mask produced by the generator, and judges whether the mask is manual or generated. The discriminators adopt the PatchGAN technique from pix2pix [15]: the image is divided into $N$×$N$ patches, each patch is judged real or fake, and the results are summed and averaged to give the final decision. All discriminator convolution kernels are 3×3, and the activation function is PReLU.
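A PatchGAN-style discriminator sketch matching this description, with 3×3 kernels and PReLU activations; the channel widths and depth are assumptions:

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Outputs an N x N map of per-patch real/fake scores; averaging the map
    (as described above) gives the final decision."""
    def __init__(self, in_ch=2):  # MRI + mask concatenated along channels
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.PReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.PReLU(),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.PReLU(),
            nn.Conv2d(256, 1, 3, stride=1, padding=1))  # N x N score map

    def forward(self, mri, mask):
        scores = self.net(torch.cat([mri, mask], dim=1))
        return scores.mean(dim=(2, 3))  # sum-and-average over all patches
```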

2 Experimental results and analysis

2.1 Datasets

The experimental data come from the PROMISE12 challenge and the First Affiliated Hospital of Anhui Medical University; each sample consists of a prostate MR image and a manually segmented mask, as shown in Fig. 3. Data augmentation techniques such as horizontal flipping, intensity enhancement, and translation were used to enlarge the dataset, finally yielding 1 403 training images and 96 test images. Experiments were run on Ubuntu with a GeForce GTX TITAN X GPU, and the method was implemented in the PyTorch deep learning framework.

Fig. 3 Prostate MRI with manual segmentation mask ((a) prostate MRI; (b) manual segmentation mask)
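A plausible augmentation pipeline for this step, sketched with torchvision; the exact parameters are not given in the paper, and in practice the spatial transforms must use the same random draw for the MRI and its mask so the pair stays aligned:

```python
import torchvision.transforms as T

# Assumed parameters, for illustration only.
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),                    # horizontal flipping
    T.ColorJitter(brightness=0.2, contrast=0.2),      # intensity enhancement
    T.RandomAffine(degrees=0, translate=(0.1, 0.1)),  # small translations
])
```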

2.2 Evaluation metrics

The Dice similarity coefficient and the Hausdorff distance are used as evaluation metrics. The Dice similarity coefficient is computed as

$ DSC\left( {\mathit{\boldsymbol{X}}, \mathit{\boldsymbol{Y}}} \right) = \frac{2\left| {\mathit{\boldsymbol{X}} \cap \mathit{\boldsymbol{Y}}} \right|}{\left| \mathit{\boldsymbol{X}} \right| + \left| \mathit{\boldsymbol{Y}} \right|} $ (3)

where $\mathit{\boldsymbol{X}}$ denotes the manual segmentation, $\mathit{\boldsymbol{Y}}$ denotes the algorithm's segmentation, and $\left| \cdot \right|$ denotes area. $DSC\left({\mathit{\boldsymbol{X}}, \mathit{\boldsymbol{Y}}} \right)$ ranges from 0 to 1; the larger the value, the greater the overlap between the segmentation result and the ground truth, and the better the segmentation.
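For reference, Eq. (3) on binary masks is a few lines of NumPy (a sketch; mask loading is assumed):

```python
import numpy as np

def dice_coefficient(x, y):
    """Eq. (3): DSC = 2|X ∩ Y| / (|X| + |Y|) for two binary masks."""
    x, y = x.astype(bool), y.astype(bool)
    intersection = np.logical_and(x, y).sum()
    return 2.0 * intersection / (x.sum() + y.sum())
```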

The Hausdorff distance is computed as follows. Given two point sets $\mathit{\boldsymbol{A}} = \{ {a_1}, \ldots, {a_m}\} $ and $\mathit{\boldsymbol{B}} = \{ {b_1}, \ldots, {b_n}\} $, the Hausdorff distance between them is defined as

$ H\left( {\mathit{\boldsymbol{A}}, \mathit{\boldsymbol{B}}} \right) = {\rm{max}}\left( {h\left( {\mathit{\boldsymbol{A}}, \mathit{\boldsymbol{B}}} \right), h\left( {\mathit{\boldsymbol{B}}, \mathit{\boldsymbol{A}}} \right)} \right) $ (4)

where

$ h\left( {\mathit{\boldsymbol{A}}, \mathit{\boldsymbol{B}}} \right) = \mathop {{\rm{max}}}\limits_{a \in \mathit{\boldsymbol{A}}} \mathop {{\rm{min}}}\limits_{b \in \mathit{\boldsymbol{B}}} \left\| {a - b} \right\| $ (5)

$ h\left( {\mathit{\boldsymbol{B}}, \mathit{\boldsymbol{A}}} \right) = \mathop {{\rm{max}}}\limits_{b \in \mathit{\boldsymbol{B}}} \mathop {{\rm{min}}}\limits_{a \in \mathit{\boldsymbol{A}}} \left\| {b - a} \right\| $ (6)

where $\left\| \cdot \right\|$ denotes the distance norm between points. The smaller $H\left({\mathit{\boldsymbol{A}}, \mathit{\boldsymbol{B}}} \right)$ is, the better the segmentation; when $\mathit{\boldsymbol{A}} = \mathit{\boldsymbol{B}}$, $H\left({\mathit{\boldsymbol{A}}, \mathit{\boldsymbol{B}}} \right) = 0$.
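Equations (4)-(6) translate directly into a short NumPy sketch over boundary point arrays (SciPy's scipy.spatial.distance.directed_hausdorff offers an optimized equivalent):

```python
import numpy as np

def directed_hausdorff(a, b):
    """h(A, B) of Eq. (5): for each point of A, the Euclidean distance to its
    nearest point of B, then the maximum over A. a: (m, 2), b: (n, 2)."""
    pairwise = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return pairwise.min(axis=1).max()

def hausdorff_distance(a, b):
    """H(A, B) of Eq. (4): the maximum of the two directed distances."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))
```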

2.3 Training process

Prostate region images generated during training are shown in Fig. 4. The generator initially produces random images; as the iterations increase, the generated images improve and finally approach the real segmentation. After training, feeding a prostate MR image into the generator produces the prostate region, thereby indirectly completing the segmentation task.

Fig. 4 Images of the prostate region generated during training ((a) MRI; (b) epoch 70; (c) epoch 109; (d) epoch 193; (e) ground truth)

To stabilize training, we use the training method proposed by Mao et al. [16] in the least squares generative adversarial network (LSGAN). We use the Adam optimizer with ${\beta _1} = 0.999$ and train for 200 epochs in total; the learning rate is 0.000 1 for the first 100 epochs and then decays exponentially to 0 over the last 100 epochs, with a batch size of 1. The convergence of the discriminator and the generator during training is shown in Fig. 5 and Fig. 6, where $los{s_{\rm{D}}}$ and $los{s_{\rm{G}}}$ denote the discriminator and generator loss functions, respectively.

Fig. 5 Discriminator convergence with the iterations
Fig. 6 Generator convergence with the iterations
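The optimizer settings above translate directly into PyTorch. In this sketch the decay factor is an assumption, since the paper states exponential decay to 0 but not the rate, and the LSGAN targets [16] replace the log loss of Eq. (1):

```python
import torch
import torch.nn.functional as F

def make_optimizer(net):
    """Adam with beta_1 = 0.999 as stated in Sec. 2.3 (beta_2 kept at its
    default of 0.999), lr = 1e-4 for the first 100 epochs, then an assumed
    exponential decay over the last 100."""
    opt = torch.optim.Adam(net.parameters(), lr=1e-4, betas=(0.999, 0.999))
    sched = torch.optim.lr_scheduler.LambdaLR(
        opt, lr_lambda=lambda epoch: 1.0 if epoch < 100 else 0.9 ** (epoch - 99))
    return opt, sched

def lsgan_d_loss(d_real, d_fake):
    """LSGAN discriminator loss [16]: least-squares targets 1 (real) / 0 (fake)."""
    return F.mse_loss(d_real, torch.ones_like(d_real)) + \
           F.mse_loss(d_fake, torch.zeros_like(d_fake))
```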

2.4 Experimental results

To quantitatively verify the effectiveness of the proposed method, we compared it with several other deep-learning-based algorithms using the Dice similarity coefficient and Hausdorff distance of Section 2.2 as metrics, as shown in Table 1.

Table 1 Quantitative evaluation of segmentation methods

Method       $DSC$/%   $H$/mm
FCN[5]       84.8      7.0
PSNet[18]    85.0      9.3
U-Net[19]    86.5      6.1
DSCNN[13]    88.5      -
Ours         88.9      5.3

Compared with fully convolutional methods such as U-Net and DSCNN, the proposed method achieves a higher $DSC$ and a lower $H$, indicating that its contours are closer to the manually drawn ones. In the test phase, only the generator needs to run, not the discriminator, so each image can be segmented quickly; the segmentation times of our method and others are shown in Table 2.

Table 2 Comparison of test time with other methods

Method                    Test time/s
Stacked ISA[20]           84
Deconvolution method[6]   18
U-Net[19]                 6
Ours                      0.6
Specialist doctor         ~120

Unlike other FCN-based deep learning methods, our method uses a generative adversarial architecture in which the discriminator judges the entire input image as a whole and can therefore learn the structural information of the prostate, whereas FCN-based methods treat and classify each pixel independently and cannot learn such intrinsic structure. Moreover, the multi-scale discriminators allow the generator to learn both local and global pixel features.

Segmentation results for some prostate MR images are shown in Fig. 7; each image comes from a different slice of a different subject, and the contours and shapes of the prostate vary widely. The red contours are the manual segmentations and the green ones are the results of our algorithm; our method produces smooth contours with little jaggedness, close to the manual segmentation.

Fig. 7 Segmentation results of our method

3 Conclusion

We propose a conditional GAN-based prostate segmentation algorithm with multi-scale discriminators: two discriminators with different input sizes but the same network structure serve as the final discriminator and are combined with the feature matching loss, and accurate segmentation of the prostate region is achieved by building a generator-discriminator model. Compared with traditional methods, our method requires no hand-designed features and can be trained end to end. Compared with fully convolutional deep learning methods, it has the following advantages:

1) FCN-based prostate segmentation algorithms treat each pixel as independent and segment pixel by pixel, losing inter-pixel information, whereas the discriminator judges the input as a whole, avoiding this loss.

2) Through adversarial training, the multi-scale discriminators guide the generator to produce a prostate mask from the input image, and the adversarial loss learns high-order structural information.

3) Because the discriminator is not needed at inference time after training, our method segments quickly. Experiments show that the segmentation results are accurate and fast, with each image segmented in less than 1 s, meeting clinical requirements.

Our method also has shortcomings: it sometimes segments non-prostate regions, possibly because the network lacks interpretability. In future work, we will study how to combine the advantages of conditional GAN-based segmentation and fully convolutional segmentation to design a new, robust network capable of real-time segmentation.

References

  • [1] He K M, Zhang X Y, Ren S Q, et al. Deep residual learning for image recognition[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016: 770-778.[DOI: 10.1109/CVPR.2016.90]
  • [2] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[C]//Proceedings of the 25th International Conference on Neural Information Processing Systems. Lake Tahoe, USA: ACM, 2012: 1097-1105.
  • [3] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[EB/OL].[2018-12-01]. https://arXiv.org/pdf/1409.1556.pdf.
  • [4] Szegedy C, Liu W, Jia Y Q, et al. Going deeper with convolutions[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA, USA: IEEE, 2015: 1-9.[DOI: 10.1109/CVPR.2015.7298594]
  • [5] Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA, USA: IEEE, 2015: 3431-3440.[DOI: 10.1109/CVPR.2015.7298965]
  • [6] Zhan S, Liang Z C, Xie D D. Deconvolutional neural network for prostate MRI segmentation[J]. Journal of Image and Graphics, 2017, 22(4): 516-522. [DOI:10.11834/jig.20170411]
  • [7] Shi Y G, Cheng K, Liu Z W. Segmentation of hippocampal subfields by using deep learning and support vector machine[J]. Journal of Image and Graphics, 2018, 23(4): 542-551. [DOI:10.11834/jig.170431]
  • [8] Liu Y P, Cai W L, Hong G B, et al. Automatic segmentation of shoulder joint in MRI by using patch-wise and full-image fully convolutional networks[J]. Journal of Image and Graphics, 2018, 23(10): 1558-1570. [DOI:10.11834/jig.180044]
  • [9] Chen L C, Papandreou G, Kokkinos I, et al. DeepLab:semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(4): 834–848. [DOI:10.1109/TPAMI.2017.2699184]
  • [10] Ren S Q, He K M, Girshick R, et al. Faster R-CNN:towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149. [DOI:10.1109/TPAMI.2016.2577031]
  • [11] Drozdzal M, Chartrand G, Vorontsov E, et al. Learning normalized inputs for iterative estimation in medical image segmentation[J]. Medical Image Analysis, 2018, 44: 1–13. [DOI:10.1016/j.media.2017.11.005]
  • [12] Milletari F, Navab N, Ahmadi S A. V-Net: fully convolutional neural networks for volumetric medical image segmentation[C]//Proceedings of the Fourth International Conference on 3D Vision. Stanford, USA: IEEE, 2016: 565-571.[DOI: 10.1109/3DV.2016.79]
  • [13] Zhu Q K, Du B, Turkbey B, et al. Deeply-supervised CNN for prostate segmentation[C]//Proceedings of 2017 International Joint Conference on Neural Networks. Anchorage, USA: IEEE, 2017: 178-184.[DOI: 10.1109/IJCNN.2017.7965852]
  • [14] Goodfellow I J, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems. Montreal, Canada: MIT Press, 2014: 2672-2680.
  • [15] Isola P, Zhu J Y, Zhou T H, et al. Image-to-image translation with conditional adversarial networks[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017: 5967-5976.[DOI: 10.1109/CVPR.2017.632]
  • [16] Mao X D, Li Q, Xie H R, et al. Least squares generative adversarial networks[C]//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017: 2813-2821.[DOI: 10.1109/ICCV.2017.304]
  • [17] Salimans T, Goodfellow I, Zaremba W, et al. Improved techniques for training GANs[C]//Proceedings of the 30th Annual Conference on Neural Information Processing Systems. Barcelona, Spain: Curran Associates, Inc., 2016.
  • [18] Tian Z Q, Liu L Z, Zhang Z F, et al. PSNet:prostate segmentation on MRI based on a convolutional neural network[J]. Journal of Medical Imaging, 2018, 5(2): #021208. [DOI:10.1117/1.JMI.5.2.021208]
  • [19] Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation[C]//Proceedings of the 18th International Conference Medical Image Computing and Computer-Assisted Intervention. Munich, Germany: Springer, 2015: 234-241.[DOI: 10.1007/978-3-319-24574-4_28]
  • [20] Liao S, Gao Y Z, Oto A, et al. Representation learning: a unified deep learning framework for automatic prostate MR segmentation[C]//Proceedings of the 16th International Conference on Medical Image Computing and Computer-Assisted Intervention. Nagoya, Japan: Springer, 2013: 254-261.[DOI: 10.1007/978-3-642-40763-5_32]