A review of generative adversarial networks and the application in medical image
Vol. 27, Issue 3, Pages 687-703 (2022)
Published: 16 March 2022
Accepted: 02 June 2021
DOI: 10.11834/jig.210247
Yinglin Zhang, Yan Hu, Higashita Risa, Jiang Liu. A review of generative adversarial networks and the application in medical image [J]. Journal of Image and Graphics, 27(3): 687-703 (2022)
The generative adversarial network (GAN) consists of a generator responsible for learning the data distribution and a discriminator responsible for judging whether samples are real or synthesized; the two learn from each other and gradually improve in the process of mutual confrontation. This model enables deep learning methods to learn the loss function automatically and reduces the dependence on expert knowledge. It has been widely applied in natural image processing and also holds great promise for solving bottleneck problems in medical image processing. This paper aims to identify the points where generative adversarial networks meet the challenges faced in the medical imaging field, to look ahead to future research directions by analyzing existing work, and to provide a reference for research in this area. 1) The basic principle of GAN is explained, and its derivative models are reviewed from the perspectives of task splitting, conditional constraints, and image-to-image translation. 2) Applications of GAN in medical imaging, including data augmentation, modality migration, image segmentation, and denoising, are reviewed, and the advantages, disadvantages, and scope of application of each method are analyzed. 3) Existing methods for assessing the quality of generated images are summarized. 4) The research progress of GAN in medical imaging is summarized and, considering the characteristics of problems in this field, the shortcomings of existing theory and applications and directions for improvement are pointed out. Since GAN was proposed, its theory has been continuously refined and substantial progress has been made in medical image processing applications, but some problems remain to be solved, including 3D data synthesis, preservation of plausible geometric structure, the use of unlabeled and unpaired data, and the cross-application of multi-modality data.
The generative adversarial network (GAN) consists of a generator that learns the data distribution and a discriminator that judges whether a sample is real or synthesized. The two networks learn from each other and gradually improve through this adversarial process. The framework enables deep learning methods to learn the loss function automatically and reduces the dependence on expert knowledge. GANs have been widely used in natural image processing and are also a promising solution to related bottleneck problems in medical image processing. This paper aims to bridge the gap between GAN and specific problems in the medical imaging field and to point out future research directions.
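To make the adversarial game concrete, the following minimal sketch, which assumes PyTorch and a toy one-dimensional data distribution (architectures and hyperparameters are illustrative only), alternates discriminator and generator updates; the discriminator effectively serves as a learned loss for the generator.

```python
# Minimal GAN training loop sketch (illustrative assumptions: toy 1-D data, tiny MLPs).
import torch
import torch.nn as nn

latent_dim = 16

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(1, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, 1) * 0.5 + 2.0      # samples from the "real" distribution
    noise = torch.randn(32, latent_dim)
    fake = G(noise)

    # Discriminator update: learn to tell real samples from generated ones.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: fool the discriminator, which acts as a learned loss function.
    g_loss = bce(D(G(noise)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```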
First, the basic principle of GAN is described. Second, we review the latest medical imaging research on data augmentation, modality migration, image segmentation, and denoising, and analyze the advantages, disadvantages, and scope of application of each method. Next, current methods for assessing the quality of generated images are summarized. Finally, the research progress, open issues, and future directions of GAN in medical imaging are summarized.
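As one concrete example of the quality-assessment methods surveyed, the Fréchet inception distance (FID) compares feature statistics of real and generated images. The sketch below is a simplified illustration that assumes the features have already been extracted with a pretrained Inception network; it is not the exact implementation used in any particular paper.

```python
# Hedged sketch of the Fréchet inception distance (FID) from pre-extracted features.
import numpy as np
from scipy import linalg

def fid(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to real and generated feature sets."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):   # numerical noise can produce tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))

# Usage with hypothetical Inception features (names are assumptions for illustration):
# score = fid(features_of_real_images, features_of_generated_images)
```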
GAN theoretical studies have focused on three aspects: task splitting, the introduction of conditional constraints, and image-to-image translation. These advances have effectively improved the quality of synthesized images, increased their resolution, and allowed finer control over the synthesis process (a brief sketch of the conditional-constraint idea is given after this paragraph). However, several challenges remain: 1) generating high-quality, high-resolution, and diverse images on large-scale, complex data sets; 2) manipulating the attributes of synthesized images at different levels and granularities; and 3) coping with the lack of paired training data while guaranteeing the quality and diversity of image translation.
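The following sketch illustrates how a conditional constraint can be injected into both networks, in the spirit of conditional GANs; the label embedding, layer sizes, and data shapes are hypothetical choices made only for illustration.

```python
# Sketch of conditional constraints: both networks receive an extra condition
# (here a class-label embedding), so image synthesis can be steered.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, latent_dim=64, n_classes=10, out_dim=784):
        super().__init__()
        self.embed = nn.Embedding(n_classes, 16)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 16, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh())

    def forward(self, z, labels):
        # The condition is concatenated with the noise vector before generation.
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

class ConditionalDiscriminator(nn.Module):
    def __init__(self, n_classes=10, in_dim=784):
        super().__init__()
        self.embed = nn.Embedding(n_classes, 16)
        self.net = nn.Sequential(
            nn.Linear(in_dim + 16, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))

    def forward(self, x, labels):
        # The discriminator judges realism *given* the same condition.
        return self.net(torch.cat([x, self.embed(labels)], dim=1))
```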
GAN applications in data augmentation, modality migration, image segmentation, and denoising of medical images have been widely studied. 1) Network models based on the Pix2pix framework can synthesize additional high-quality, high-resolution samples and thereby effectively improve segmentation and classification performance through data augmentation. However, problems remain, including insufficient diversity of the synthesized samples, difficulty in preserving basic biological structures, and limited three-dimensional image synthesis capability.
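A hedged sketch of a Pix2pix-style generator objective for such augmentation is shown below: the generator translates a conditioning image (for example, a label map) into a synthetic sample, and an adversarial term is combined with an L1 reconstruction term. The network and tensor names are assumptions for illustration.

```python
# Pix2pix-style generator loss sketch: adversarial term + L1 reconstruction term.
import torch
import torch.nn.functional as F

def pix2pix_generator_loss(generator, discriminator, cond_img, target_img, lambda_l1=100.0):
    fake_img = generator(cond_img)
    # The discriminator sees the (condition, output) pair, as in conditional GANs.
    pred_fake = discriminator(torch.cat([cond_img, fake_img], dim=1))
    adv_loss = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
    # The L1 term keeps the synthesized sample close to the paired ground truth.
    l1_loss = F.l1_loss(fake_img, target_img)
    return adv_loss + lambda_l1 * l1_loss, fake_img
```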
2) Network models based on the CycleGAN framework do not require paired training images and have been studied extensively for modality migration, but they may lose basic structural information. Current research on structure preservation during modality migration is largely limited to fusing auxiliary information such as edges and segmentation maps.
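The core of such unpaired translation is the cycle-consistency constraint, sketched below under the assumption that two generators (one per translation direction) are defined elsewhere; note that an L1 cycle loss constrains overall content but does not explicitly guarantee that anatomical structure is preserved.

```python
# Cycle-consistency loss sketch for unpaired modality migration (CycleGAN-style).
import torch.nn.functional as F

def cycle_consistency_loss(G_ab, G_ba, real_a, real_b, lambda_cyc=10.0):
    fake_b = G_ab(real_a)     # modality A -> modality B
    rec_a = G_ba(fake_b)      # back to A; should reproduce real_a
    fake_a = G_ba(real_b)
    rec_b = G_ab(fake_a)
    # L1 cycle loss constrains content but not anatomy explicitly.
    return lambda_cyc * (F.l1_loss(rec_a, real_a) + F.l1_loss(rec_b, real_b))
```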
3) Both the generator and the discriminator can be combined with existing segmentation models to improve segmentation performance: the generator can synthesize additional training data, while the discriminator can guide model training at a high semantic level and make full use of unlabeled data. However, current research mainly focuses on single-modality image segmentation.
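One common way to realize this, sketched below under assumed network and data names, is to add a discriminator on predicted masks so that unlabeled images contribute an adversarial, semantic-level training signal alongside the supervised loss on labeled images.

```python
# Semi-supervised adversarial segmentation loss sketch (networks and data assumed).
import torch
import torch.nn.functional as F

def adversarial_segmentation_loss(seg_net, mask_disc, labeled_img, gt_mask,
                                  unlabeled_img, lambda_adv=0.01):
    # Supervised term on labeled data.
    pred_l = seg_net(labeled_img)
    sup_loss = F.cross_entropy(pred_l, gt_mask)

    # Adversarial term on unlabeled data: predicted masks should be
    # indistinguishable from real annotations according to the discriminator.
    pred_u = torch.softmax(seg_net(unlabeled_img), dim=1)
    d_out = mask_disc(pred_u)
    adv_loss = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    return sup_loss + lambda_adv * adv_loss
```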
4) GAN-based image denoising can reconstruct normal-dose images from low-dose images, reducing the radiation dose received by patients.
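A minimal sketch of such a denoising objective, with all networks and tensors assumed for illustration, combines an adversarial term with a pixel-wise fidelity term between the restored image and the paired normal-dose image.

```python
# GAN-based denoising loss sketch: low-dose input -> normal-dose estimate.
import torch
import torch.nn.functional as F

def denoising_generator_loss(generator, discriminator, low_dose, normal_dose,
                             lambda_pix=10.0):
    denoised = generator(low_dose)
    pred = discriminator(denoised)
    adv_loss = F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))
    # Pixel fidelity keeps the denoised output close to the paired normal-dose scan.
    pix_loss = F.l1_loss(denoised, normal_dose)
    return adv_loss + lambda_pix * pix_loss
```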
The critical issues of GAN in medical image processing are as follows. 1) Most medical image data, such as MRI (magnetic resonance imaging) and CT (computed tomography), are three-dimensional; improving the synthesis quality and resolution of three-dimensional data is therefore a key problem. 2) It is difficult to ensure the diversity of synthesized data while keeping its basic geometric structure plausible. 3) How to make full use of unlabeled and unpaired data to generate high-quality, high-resolution, and diverse images remains open. 4) The cross-modality generalization performance of algorithms and the effective migration between different modalities need to be improved.
Future research should focus on the following. 1) Optimizing the network architecture, objective function, and training strategy for 3D data synthesis to improve training stability and the quality, resolution, and diversity of synthesized 3D images. 2) Further integrating prior geometric knowledge into GAN. 3) Taking full advantage of the weakly supervised nature of GAN. 4) Extracting invariant features via attribute decoupling to achieve good generalization performance and to control attributes at different levels and granularities, according to need, during modality migration.
In conclusion, since GAN was proposed, its theory has been continuously improved, and considerable progress has been made in medical image applications such as data augmentation, modality migration, image segmentation, and denoising. Several challenging issues remain to be resolved, including three-dimensional data synthesis, preservation of plausible geometric structure, the use of unlabeled and unpaired data, and the cross-application of multi-modality data.
Keywords: generative adversarial network (GAN); medical image; deep learning; data augmentation; modality migration; image segmentation; image denoising