医学3D计算机视觉:研究进展和挑战
Advances and challenges in medical 3D computer vision
- 2020, Vol. 25, No. 10, Pages 2002-2012
Received: 2020-05-31
Revised: 2020-07-07
Accepted: 2020-07-14
Published in print: 2020-10-16
DOI: 10.11834/jig.200244
Diagnosis from medical images underpins many clinical decisions, and intelligent analysis of medical images is an important component of healthcare artificial intelligence. Meanwhile, with the rise and popularization of 3D spatial sensors, 3D computer vision is becoming increasingly important. This paper focuses on the intersection of medical image analysis and 3D computer vision, that is, medical 3D computer vision (medical 3D vision). We divide a medical 3D computer vision system into three levels (tasks, data, and representation) and survey recent literature at each level. At the task level, we introduce classification, segmentation, detection, registration, and imaging reconstruction in medical 3D computer vision, together with their roles and characteristics in clinical diagnosis and medical image analysis. At the data level, we briefly introduce the most important modalities of medical 3D data, including computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), as well as other data formats proposed in emerging research. On this basis, we compile the important research datasets in medical 3D computer vision and annotate their data modalities and principal vision tasks. At the representation level, we introduce and discuss the advantages and disadvantages of 2D networks, 3D networks, and hybrid networks for representation learning on medical 3D data. In addition, given the small-data problem prevalent in medical imaging, we focus on the pretraining problem in representation learning for medical 3D data. Finally, we summarize the current state of medical 3D computer vision research and point out open research challenges, problems, and directions.
Medical imaging is an important tool for medical diagnosis and clinical decision support that enables clinicians to view the inside of the human body. Medical image analysis, as an important part of healthcare artificial intelligence, provides fast, smart, and accurate decision support for clinicians and radiologists. Meanwhile, 3D computer vision is an emerging research area driven by the rapid development and popularization of 3D sensors (e.g., light detection and ranging (LiDAR) and RGB-D cameras) and by computer-aided design in the game industry and smart manufacturing. In this paper, we focus on the intersection of medical image analysis and 3D computer vision, called medical 3D computer vision. We introduce the research advances and challenges in medical 3D computer vision at three levels, namely, tasks (medical 3D computer vision tasks), data (data modalities and datasets), and representation (efficient and effective representation learning for 3D images).

First, at the task level, we introduce classification, segmentation, detection, registration, and reconstruction in medical 3D computer vision. Classification, such as malignancy stratification and symptom estimation, is an everyday task for clinicians and radiologists. Segmentation denotes assigning each voxel (pixel) a semantic label, and detection refers to localizing key objects in medical images; both include organ and lesion variants, i.e., organ/lesion segmentation and detection. Registration, that is, calculating the spatial transformation from one image to another, plays an important role in medical imaging scenarios, such as spatially aligning multiple images from serial examinations of a follow-up patient. Reconstruction is also a key task in medical imaging that aims at fast and accurate imaging to reduce patients' costs.

Second, at the data level, we introduce the important data modalities in medical 3D computer vision, such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). The principle and clinical scenario of each imaging modality are briefly discussed. We then compile a comprehensive list of medical 3D image research datasets covering the classification, segmentation, detection, registration, and reconstruction tasks in CT, MRI, and graphics formats (mesh).

Third, we discuss representation learning for medical 3D computer vision. 2D convolutional neural networks (CNNs), 3D CNNs, and hybrid approaches are the commonly used methods for 3D representation learning. 2D approaches benefit from large-scale 2D pretraining and from triplanar and tri-slice 2D representations of 3D medical images, whereas they are generally weak at capturing large 3D contexts. 3D approaches are natively strong in modeling 3D context; however, few publicly available 3D medical datasets are large and diverse enough for universal 3D pretraining. For hybrid (2D + 3D) approaches, we introduce multi-stream and multi-stage methods; although they are empirically effective, the intrinsic disadvantages of their 2D and 3D parts remain. To address the small-data issue in medical 3D computer vision, we discuss pretraining approaches for medical 3D images. Pretraining a 3D CNN with videos is straightforward to implement, but a significant domain gap exists between medical images and videos. Collecting massive medical datasets for pretraining is theoretically feasible; however, pooling tens of medical datasets yields only thousands of 3D medical image cases, which is far smaller than natural 2D image datasets. Research efforts exploring unsupervised (self-supervised) learning to obtain pretrained 3D models have been reported; although the results are impressive, the performance of current unsupervised learning is not yet comparable to that of fully supervised learning, and unsupervised representation learning from medical 3D images cannot leverage the power of massive 2D supervised learning datasets. We therefore introduce several techniques for 2D-to-3D transfer learning, including inflated 3D (I3D), axial-coronal-sagittal (ACS) convolutions, and AlignShift. I3D enables 2D-to-3D transfer learning by inflating 2D convolution kernels into 3D, whereas ACS convolutions and AlignShift achieve it by introducing novel operators that arrange features from 3D receptive fields in a 2D manner.

Finally, we discuss several research challenges, problems, and directions for medical 3D computer vision. We first identify the anisotropy issue in medical 3D images, which can be a source of domain gap, for example, between thick-slice and thin-slice data. We then discuss data privacy and information silos in medical imaging, which are important factors behind the small-data issue in medical 3D computer vision. Federated learning is highlighted as a possible solution to information silos; however, numerous problems remain open, such as how to build efficient systems and algorithms for federated learning, how to deal with adversarial participants, and how to handle unaligned and missing data. We also identify the data imbalance and long-tail issues in medical 3D computer vision: because real-world patient distributions are imbalanced and long-tailed, efficiently and effectively learning representations from noisy, imbalanced, and long-tailed real-world data can be extremely challenging in practice. We further mention automatic machine learning as a future direction. Although end-to-end deep learning simplifies the development and deployment of medical image applications, considerable engineering effort is still required to tune each new medical image task, such as designing deep neural networks, choosing data augmentation, performing data preprocessing, and tuning the learning procedure. Automating these hyperparameter choices with hand-crafted or intelligent systems could save substantial effort for researchers and engineers. In summary, medical 3D computer vision is an emerging research area; with increasingly large-scale datasets, easy-to-use and reproducible methodology, and innovative tasks, it is an exciting field that can elevate healthcare to a new level.
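As an illustration of the voxel-wise segmentation task described above, segmentation quality is commonly scored with the Dice overlap coefficient. The following is a minimal NumPy sketch (the function name and toy masks are our own, for illustration only):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary voxel masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))

# Two overlapping 4x4x4 binary masks: 32 voxels each, 16 voxels in common.
a = np.zeros((4, 4, 4), dtype=bool)
a[:2] = True
b = np.zeros((4, 4, 4), dtype=bool)
b[1:3] = True
print(round(dice_score(a, b), 2))  # → 0.5
```

A perfect prediction gives a score of 1, no overlap gives 0; the `eps` term keeps the ratio defined when both masks are empty.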
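The I3D kernel inflation mentioned in the abstract (repeating a pretrained 2D kernel along the depth axis and rescaling by the depth) can be sketched in a few lines of NumPy; the helper name and toy data here are illustrative, not from the original paper:

```python
import numpy as np

def inflate_kernel(kernel_2d: np.ndarray, depth: int) -> np.ndarray:
    """Inflate a 2D convolution kernel (H, W) into a 3D kernel (D, H, W)
    by stacking D copies along depth and dividing by D, so that the 3D
    response on a depth-constant volume matches the original 2D response."""
    return np.repeat(kernel_2d[None, :, :], depth, axis=0) / depth

# A 3x3 Sobel-like 2D kernel inflated to depth 3.
k2d = np.array([[1., 0., -1.],
                [2., 0., -2.],
                [1., 0., -1.]])
k3d = inflate_kernel(k2d, depth=3)

# On a volume whose slices are identical, the inflated kernel reproduces
# the 2D response, which is what makes 2D-pretrained weights reusable in 3D.
patch = np.arange(9, dtype=float).reshape(3, 3)
volume = np.stack([patch] * 3)  # shape (3, 3, 3), constant along depth
resp_2d = float((k2d * patch).sum())
resp_3d = float((k3d * volume).sum())
assert np.isclose(resp_2d, resp_3d)
```

The rescaling by 1/D is the key design choice: it preserves activation statistics at initialization, so the inflated 3D network starts from a state functionally equivalent to the 2D pretrained model.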
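For the federated learning direction discussed above, the core aggregation step of the standard FedAvg rule (a data-size-weighted average of client parameters) can be sketched as follows; this is a schematic of the rule only, not of any production federated system:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate client model parameters by data-size-weighted averaging
    (the FedAvg rule): w = sum_k (n_k / n) * w_k."""
    total = float(sum(client_sizes))
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two hospitals with 100 and 300 cases: the larger site gets 3x the weight.
w_hospital_a = np.array([0.0, 2.0])
w_hospital_b = np.array([4.0, 6.0])
global_w = federated_average([w_hospital_a, w_hospital_b], [100, 300])
print(global_w)  # → [3. 5.]
```

Real deployments add the problems the abstract lists on top of this step: secure aggregation against adversarial participants, communication-efficient scheduling, and handling of unaligned or missing client data.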
Armato III S G, McLennan G, Bidaut L, McNitt-Gray M F, Meyer C R, Reeves A P, Zhao B S, Aberle D R, Henschke C I, Hoffman E A, Kazerooni E A, MacMahon H, Van Beek E J R, Yankelevitz D, Biancardi A M, Bland P H, Brown M S, Engelmann R M, Laderach G E, Max D, Pais R C, Qing D P Y, Roberts R Y, Smith A R, Starkey A, Batra P, Caligiuri P, Farooqi A, Gladish G W, Jude C M, Munden R F, Petkovska I, Quint L E, Schwartz L H, Sundaram B, Dodd L E, Fenimore C, Gur D, Petrick N, Freymann J, Kirby J, Hughes B, Vande Casteele A, Gupte S, Sallam M, Heath M D, Kuhn M H, Dharaiya E, Burns R, Fryd D S, Salganicoff M, Anand V, Shreter U, Vastagh S, Croft B Y and Clarke L P. 2011. The lung image database consortium (LIDC) and image database resource initiative (IDRI):a completed reference database of lung nodules on CT scans. Medical Physics, 38(2):915-931[DOI:10.1118/1.3528204]
Balakrishnan G, Zhao A, Sabuncu M R, Dalca A V and Guttag J. 2018. An unsupervised learning model for deformable medical image registration//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE: 9252-9260[DOI:10.1109/CVPR.2018.00964]
Bilic P, Christ P F, Vorontsov E, Chlebus G, Chen H, Dou Q, Fu C W, Han X, Heng P A, Hesser J, Kadoury S, Konopczynski T, Le M, Li C M, Li X M, Lipkovà J, Lowengrub J, Meine H, Moltz J H, Pal C, Piraud M, Qi X J, Qi J, Rempfler M, Roth K, Schenk A, Sekuboyina A, Vorontsov E, Zhou P, Hülsemeyer C, Beetz M, Ettlinger F, Gruen F, Kaissis G, Lohöfer F, Braren R, Holch J, Hofmann F, Sommer W, Heinemann V, Jacobs C, Mamani G E H, Van Ginneken B, Chartrand G, Tang A, Drozdzal M, Cohen A B, Klang E, Amitai M M, Konen E, Greenspan H, Moreau J, Hostettler A, Soler L, Vivanti R, Szeskin A, Lev-Cohain N, Sosna J, Joskowicz L and Menze B H. 2019. The liver tumor segmentation benchmark (LITS)[EB/OL].[2020-05-01]. https://arxiv.org/pdf/1901.04056v1.pdf
Bonawitz K, Eichner H, Grieskamp W, Huba D, Ingerman A, Ivanov V, Kiddon C, Konečny J, Mazzocchi S, McMahan B, Van Overveldt T, Petrou D, Ramage D and Roselander J. 2019. Towards federated learning at scale: system design[EB/OL].[2020-05-01]. https://arxiv.org/pdf/1902.01046.pdf
Carreira J and Zisserman A. 2017. Quo vadis, action recognition? A new model and the kinetics dataset//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE: 4724-4733[DOI:10.1109/CVPR.2017.502]
Chen S H, Ma K and Zheng Y F. 2019. Med3D: transfer learning for 3D medical image analysis[EB/OL].[2020-05-01]. https://arxiv.org/pdf/1904.00625.pdf
Çiçek Ö, Abdulkadir A, Lienkamp S S, Brox T and Ronneberger O. 2016. 3D U-Net: learning dense volumetric segmentation from sparse annotation//Proceedings of the 19th International Conference on Medical Image Computing and Computer-Assisted Intervention. Athens: Springer: 424-432[DOI:10.1007/978-3-319-46723-8_49]
Deng J, Dong W, Socher R, Li L J, Li K and Li F F. 2009. ImageNet: a large-scale hierarchical image database//Proceedings of 2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami: IEEE: 248-255[DOI:10.1109/CVPR.2009.5206848]
Dou Q, Yu L Q, Chen H, Jin Y M, Yang X, Qin J and Heng P A. 2017. 3D deeply supervised network for automated segmentation of volumetric medical images. Medical Image Analysis, 41:40-54[DOI:10.1016/j.media.2017.05.001]
Han X. 2017. Automatic liver lesion segmentation using a deep convolutional neural network method[EB/OL].[2020-05-31]. https://arxiv.org/pdf/1704.07239.pdf
Hanocka R, Hertz A, Fish N, Giryes R, Fleishman S and Cohen-Or D. 2019. MeshCNN:a network with an edge. ACM Transactions on Graphics, 38(4):#90[DOI:10.1145/3306346.3322959]
Hara K, Kataoka H and Satoh Y. 2018. Can spatiotemporal 3D CNNs retrace the history of 2D CNNs and ImageNet?//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE: 6546-6555[DOI:10.1109/CVPR.2018.00685]
Heller N, Sathianathen N, Kalapara A, Walczak E, Moore K, Kaluzniak H, Rosenberg J, Blake P, Rengel Z, Oestreich M, Dean J, Tradewell M, Shah A, Tejpaul R, Edgerton Z, Peterson M, Raza S, Regmi S, Papanikolopoulos N and Weight C. 2019. The KiTS19 challenge data: 300 kidney tumor cases with clinical context, CT semantic segmentations, and surgical outcomes[EB/OL].[2020-05-01]. https://arxiv.org/pdf/1904.00445.pdf
Isensee F, Petersen J, Klein A, Zimmerer D, Jaeger P F, Kohl S, Wasserthal J, Koehler G, Norajitra T, Wirkert S and Maier-Hein K H. 2018. nnU-Net: self-adapting framework for U-Net-based medical image segmentation[EB/OL].[2020-05-01]. https://arxiv.org/pdf/1809.10486.pdf
Jaderberg M, Simonyan K and Zisserman A. 2015. Spatial transformer networks//Proceedings of the 28th International Conference on Neural Information Processing Systems. Montreal: ACM: 2017-2025
Jiang Z K, Lyu X G, Zhang J X, Zhang Q and Wei X P. 2020. Review of deep learning methods for MRI brain tumor image segmentation. Journal of Image and Graphics, 25(2):215-228 [DOI:10.11834/jig.190173]
Knoll F, Zbontar J, Sriram A, Muckley M J, Bruno M, Defazio A, Parente M, Geras K J, Katsnelson J, Chandarana H, Zhang Z Z, Drozdzalv M, Romero A, Rabbat M, Vincent P, Pinkerton J, Wang D, Yakubova N, Owens E, Zitnick C L, Recht M P, Sodickson D K and Lui Y W. 2020. fastMRI:a publicly available raw k-space and DICOM dataset of knee images for accelerated MR image reconstruction using machine learning. Radiology:Artificial Intelligence, 2(1):#190007[DOI:10.1148/ryai.2020190007]
Lin T Y, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Dollár P and Zitnick C L. 2014. Microsoft COCO: common objects in context//Proceedings of the 13th European Conference on Computer Vision. Zurich: Springer: 740-755[DOI:10.1007/978-3-319-10602-1_48]
Litjens G, Kooi T, Bejnordi B E, Setio A A A, Ciompi F, Ghafoorian M, Van Der Laak J A W M, Van Ginneken B and Sánchez C I. 2017. A survey on deep learning in medical image analysis. Medical Image Analysis, 42:60-88[DOI:10.1016/j.media.2017.07.005]
Menze B H, Jakab A, Bauer S, Kalpathy-Cramer J, Farahani K, Kirby J, Burren Y, Porz N, Slotboom J, Wiest R, Lanczi L, Gerstner E, Weber M A, Arbel T, Avants B B, Ayache N, Buendia P, Collins D L, Cordier N, Corso J J, Criminisi A, Das T, Delingette H, Demiralp Ç, Durst C R, Dojat M, Doyle S, Festa J, Forbes F, Geremia E, Glocker P, Golland P, Guo X T, Hamamci A, Iftekharuddin K M, Jena R, John N M, Konukoglu E, Lashkari D, Mariz J A, Meier R, Pereira S, Precup D, Price S J, Raviv T R, Reza S M S, Ryan M, Sarikaya D, Schwartz L, Shin H C, Shotton J, Silva C A, Sousa N, Subbanna N K, Szekely G, Taylor T J, Thomas O M, Tustison N J, Unal G, Vasseur F, Wintermark M, Ye D H, Zhao L, Zhao B S, Zikic D, Prastawa M, Reyes M and Van Leemput K. 2015. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Transactions on Medical Imaging, 34(10):1993-2024[DOI:10.1109/TMI.2014.2377694]
Petersen R C, Aisen P S, Beckett L A, Donohue M C, Gamst A C, Harvey D J, Jack C R, Jagust W J, Shaw L M, Toga A W, Trojanowski J Q and Weiner M W. 2010. Alzheimer's disease neuroimaging initiative (ADNI):clinical characterization. Neurology, 74(3):201-209[DOI:10.1212/WNL.0b013e3181cb3e25]
Prasoon A, Petersen K, Igel C, Lauze F, Dam E and Nielsen M. 2013. Deep feature learning for knee cartilage segmentation using a triplanar convolutional neural network//Proceedings of the 16th International Conference on Medical Image Computing and Computer-Assisted Intervention. Nagoya: Springer: 246-253[DOI:10.1007/978-3-642-40763-5_31]
Qi Charles R, Su H, Mo K C and Guibas L J. 2017. PointNet: deep learning on point sets for 3D classification and segmentation//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE: 77-85[DOI:10.1109/CVPR.2017.16]
RibFrac Team. 2020. MICCAI 2020 RibFrac challenge: rib fracture detection and classification[EB/OL].[2020-05-01]. https://ribfrac.grand-challenge.org
Setio A A A, Traverso A, De Bel T, Berens M S N, Van Den Bogaard C, Cerello P, Chen H, Dou Q, Fantacci M E, Geurts B, Van Der Gugten R, Heng P A, Jansen B, De Kaste M M J, Kotov V, Lin J Y H, Manders J T M C, Sóñora-Mengana A, García-Naranjo J C, Papavasileiou E, Prokop M, Saletta M, Schaefer-Prokop C M, Scholten E T, Scholten L, Snoeren M M, Torres E L, Vandemeulebroucke J, Walasek N, Zuidhof G C A, Van Ginneken B and Jacobs C. 2017. Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images:the LUNA16 challenge. Medical Image Analysis, 42:1-13[DOI:10.1016/j.media.2017.06.015]
Shen D G, Wu G R and Suk H I. 2017. Deep learning in medical image analysis. Annual Review of Biomedical Engineering, 19:221-248[DOI:10.1146/annurev-bioeng-071516-044442]
Simpson A L, Antonelli M, Bakas S, Bilello M, Farahani K, Van Ginneken B, Kopp-Schneider A, Landman B A, Litjens G, Menze B, Ronneberger O, Summers R M, Bilic P, Christ P F, Do R K G, Gollub M, Golia-Pernicka J, Heckers S H, Jarnagin W R, McHugo M K, Napel S, Vorontsov E, Maier-Hein L and Cardoso M J. 2019. A large annotated medical image dataset for the development and evaluation of segmentation algorithms[EB/OL].[2020-05-01]. https://arxiv.org/pdf/1902.09063.pdf
Tang H, Chen X M, Liu Y, Lu Z P, You J H, Yang M Z, Yao S Y, Zhao G Q, Xu Y, Chen T F, Liu Y and Xie X H. 2019. Clinically applicable deep learning framework for organs at risk delineation in CT images. Nature Machine Intelligence, 1(10):480-491[DOI:10.1038/s42256-019-0099-z]
The National Lung Screening Trial Research Team. 2011. Reduced lung-cancer mortality with low-dose computed tomographic screening. New England Journal of Medicine, 365(5):395-409[DOI:10.1056/NEJMoa1102873]
Wang G, Ye J C, Mueller K and Fessler J A. 2018. Image reconstruction is a new frontier of machine learning. IEEE Transactions on Medical Imaging, 37(6):1289-1296[DOI:10.1109/TMI.2018.2833635]
Wu Z R, Song S R, Khosla A, Yu F, Zhang L G, Tang X O and Xiao J X. 2015. 3D ShapeNets: a deep representation for volumetric shapes//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston: IEEE: 1912-1920[DOI:10.1109/CVPR.2015.7298801]
Xia Y D, Xie L X, Liu F Z, Zhu Z T, Fishman E K and Yuille A L. 2018. Bridging the gap between 2D and 3D organ segmentation with volumetric fusion net//Proceedings of the 21st International Conference on Medical Image Computing and Computer-Assisted Intervention. Granada: Springer: 445-453[DOI:10.1007/978-3-030-00937-3_51]
Yan K, Peng Y F, Sandfort V, Bagheri M, Lu Z Y and Summers R M. 2019. Holistic and comprehensive annotation of clinically significant findings on diverse CT images: learning from radiology reports and label ontology//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE: 8515-8524[DOI:10.1109/CVPR.2019.00872]
Yan K, Wang X S, Lu L, Zhang L, Harrison A P, Bagheri M and Summers R M. 2018. Deep lesion graphs in the wild: relationship learning and organization of significant radiology image findings in a diverse large-scale lesion database//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE: 9261-9270[DOI:10.1109/CVPR.2018.00965]
Yang J C, He Y, Huang X Y, Xu J W, Ye X D, Tao G Y and Ni B B. 2020a. AlignShift: bridging the gap of imaging thickness in 3D anisotropic volumes[EB/OL].[2020-05-01]. https://arxiv.org/pdf/2005.01969.pdf
Yang J C, Huang X Y, Ni B B, Xu J W, Yang C Q and Xu G Z. 2019. Reinventing 2D convolutions for 3D images[EB/OL].[2020-05-01]. https://arxiv.org/pdf/1911.10477.pdf
Yang X, Xia D, Kin T and Igarashi T. 2020b. IntrA: 3D intracranial aneurysm dataset for deep learning[EB/OL].[2020-05-01]. https://arxiv.org/pdf/2003.02920v1.pdf
Zhao W, Yang J C, Ni B B, Bi D X, Sun Y L, Xu M D, Zhu X X, Li C, Jin L, Gao P, Wang P J, Hua Y Q and Li M. 2019. Toward automatic prediction of EGFR mutation status in pulmonary adenocarcinoma with 3D deep learning. Cancer Medicine, 8(7):3532-3543[DOI:10.1002/cam4.2233]
Zhao W, Yang J C, Sun Y L, Li C, Wu W L, Jin L, Yang Z M, Ni B B, Gao P, Wang P J, Hua Y Q and Li M. 2018. 3D deep learning from CT scans predicts tumor invasiveness of subcentimeter pulmonary adenocarcinomas. Cancer Research, 78(24):6881-6889[DOI:10.1158/0008-5472.CAN-18-0696]
Zheng H, Zhang Y Z, Yang L, Liang P X, Zhao Z, Wang C L and Chen D Z. 2019. A new ensemble learning framework for 3D biomedical image segmentation//Proceedings of 2019 AAAI Conference on Artificial Intelligence. Honolulu: AAAI: 5909-5916[DOI:10.1609/aaai.v33i01.33015909]
Zhou B L, Zhao H, Puig X, Fidler S, Barriuso A and Torralba A. 2017. Scene parsing through ADE20K dataset//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE: 5122-5130[DOI:10.1109/CVPR.2017.544]
Zhou Z W, Siddiquee M M R, Tajbakhsh N and Liang J M. 2018. UNet++: a nested U-Net architecture for medical image segmentation//Proceedings of the 4th Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Granada: Springer: 3-11[DOI:10.1007/978-3-030-00889-5_1]
Zhou Z W, Sodha V, Siddiquee M M R, Feng R B, Tajbakhsh N, Gotway M B and Liang J M. 2019. Models genesis: generic autodidactic models for 3D medical image analysis//Proceedings of the 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention. Shenzhen: Springer: 384-393[DOI:10.1007/978-3-030-32251-9_42]