发布时间: 2020-10-16
DOI: 10.11834/jig.200291
2020 | Volume 25 | Number 10

综述


机器学习在术中光学成像技术中的应用研究
张崇1,2,3, 王坤1,2,3, 田捷1,2,3,4
1. 中国科学院自动化研究所分子影像重点实验室, 北京 100190;
2. 中国科学院大学人工智能学院, 北京 100049;
3. 北京市分子影像重点实验室, 北京 100190;
4. 北京航空航天大学大数据精密医学高级创新中心, 北京 100083

摘要

术中光学成像技术的兴起为临床手术提供了更加便捷和直观的观察手段。传统的术中光学成像方法包括开放式光学成像和术中腔镜、内镜成像等,这些方法保障了临床手术的顺利进行,同时也促进了微创手术的发展。随后发展起来的术中光学成像技术还有窄带腔镜成像、术中激光共聚焦显微成像和近红外激发荧光成像等。术中光学成像技术可以辅助医生精准定位肿瘤、快速区分良恶性组织和检测微小病灶等,在诸多临床应用领域表现出了较好的应用效果。但术中光学成像技术也存在成像质量受限、缺乏有力的成像分析工具,以及只能成像表浅组织的问题。机器学习的加入,有望突破瓶颈,进一步推动术中光学成像技术的发展。本文针对术中光学成像技术,对机器学习在这一领域的应用研究展开调研,具体包括:机器学习对术中光学成像质量的优化、辅助术中光学成像的智能分析,以及辅助基于术中光学影像的3维建模等内容。本文对机器学习在术中光学成像领域的应用进行总结和分析,特别叙述了深度学习方法在该领域的应用前景,为后续的研究提供更宽泛的思路。

关键词

术中光学成像技术; 机器学习; 成像优化; 成像智能分析; 3维建模

Review: the application of machine learning in intraoperative optical imaging technologies
Zhang Chong1,2,3, Wang Kun1,2,3, Tian Jie1,2,3,4
1. Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China;
2. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China;
3. Beijing Key Laboratory of Molecular Imaging, Beijing 100190, China;
4. Beihang University Advanced Innovation Center for Big Data-Based Precision Medicine, Beijing 100083, China

Abstract

The rise of intraoperative optical imaging technologies provides convenient and intuitive observation methods for clinical surgery. Traditional intraoperative optical imaging methods include open optical imaging and intraoperative endoscopic imaging. These methods ensure the smooth implementation of clinical surgery and promote the development of minimally invasive surgery. Subsequent methods include narrow-band endoscopic imaging, intraoperative laser confocal microscopy, and near-infrared excited fluorescence imaging. Narrow-band endoscopic imaging uses an optical filter to remove the broad-band components of the spectrum emitted by the endoscope light source, leaving only a narrow-band spectrum for the diagnosis of various diseases of the digestive tract. The narrow-band spectrum enhances the imaging of gastrointestinal mucosal vessels. For lesions with microvascular changes, the narrow-band imaging system has evident advantages over ordinary endoscopy in distinguishing lesions. Beyond the digestive tract, narrow-band endoscopic imaging has also been widely used in otolaryngology, the respiratory tract, gynecological endoscopy, and laparoscopic surgery. Intraoperative laser confocal microscopy is a new type of imaging method. It can realize superficial tissue imaging in vivo and provide pathological information by using the principle of excited fluorescence imaging. This imaging method offers high clarity owing to confocal imaging and can be used for lesion positioning. Near-infrared excited fluorescence imaging uses excitation fluorescence imaging equipment combined with corresponding fluorescent contrast agents (such as ICG (indocyanine green) and methylene blue) to achieve intraoperative specific imaging of lesions, tissues, and organs in vivo. The basic principle is that the excitation light stimulates the contrast agent accumulated in the tissue, the fluorescent contrast agent emits a fluorescence signal, and real-time imaging is realized by collecting these signals.
In clinical research, near-infrared fluorescence imaging technology is often used for lymphatic vessel tracing and accurate tumor resection. Because contrast agents have different imaging spectral bands, the corresponding near-infrared fluorescence imaging equipment is also developing toward a multichannel imaging mode that can image multiple contrast agents and specifically label multiple tissues in the same field of view during surgery. Multichannel near-infrared fluorescent surgical navigation equipment that has been gradually developed can realize simultaneous fluorescence imaging of multiple organs and tissues. These intraoperative optical imaging technologies can assist doctors in accurately locating tumors, rapidly distinguishing between benign and malignant tissues, and detecting small lesions. They have shown benefits in many clinical applications. However, optical imaging is susceptible to interference from ambient light, and optical signals undergo absorption and scattering as they propagate through tissue. Intraoperative optical imaging technologies therefore suffer from limited imaging quality and can image only superficial tissue. In clinical research, intelligent analysis of preoperative imaging is developing rapidly, whereas information analysis of intraoperative imaging still lacks powerful analytical tools and methods. The design of effective intraoperative optical imaging analysis algorithms needs further exploration. Machine learning is a tool that has developed with the age of computer information technology and is expected to provide an effective solution to the abovementioned problems. With the accumulation and explosion of data, deep learning, a type of machine learning, provides end-to-end algorithms. It can learn the internal relationships among things autonomously through network training, establish an empirical model, and realize the functions of traditional algorithms.
Deep learning has shown enhanced results in the analysis and processing of natural images and is being continuously promoted and applied to various fields. Machine learning provides powerful technical means for intelligent analysis, image processing, and three-dimensional reconstruction, but research on applying machine learning to intraoperative optical imaging remains relatively scarce. The addition of machine learning is expected to break through the bottleneck and promote the development of intraoperative optical imaging technologies. This article focuses on intraoperative optical imaging technologies and investigates the application of machine learning in this field in recent years, including optimizing intraoperative optical imaging quality, assisting intelligent analysis of intraoperative optical imaging, and promoting three-dimensional modeling from intraoperative optical imaging. In the field of machine learning for intraoperative optical imaging optimization, existing research includes target detection of specific tissues, such as soft tissue segmentation and image fusion, and optimization of imaging effects, such as resolution enhancement of near-infrared fluorescence imaging during surgery and intraoperative endoscopic smoke removal. Furthermore, machine learning assists doctors in performing intraoperative optical imaging analysis, including the identification of benign and malignant tissues and the classification of lesion types and grades. Therefore, it can provide a timely reference for the surgeon to judge the state of the patient during the clinical operation and before the pathological examination. In the field of intraoperative optical imaging reconstruction, machine learning can be combined with preoperative images (such as computed tomography and magnetic resonance imaging) to assist in intraoperative soft tissue reconstruction, or it can perform three-dimensional reconstruction based on intraoperative images alone.
It can be used for localization, three-dimensional organ morphology reconstruction, and tracking of intraoperative tissues and surgical instruments. Thus, machine learning is expected to provide a technical foundation for robotic surgery and augmented reality surgery in the future. This article summarizes and analyzes the application of machine learning in the field of intraoperative optical imaging and describes the application prospects of deep learning. As a review, it investigates the application of machine learning in intraoperative optical imaging mainly from three aspects: intraoperative optical image optimization, intelligent analysis of optical imaging, and three-dimensional reconstruction. We also introduce related research and expected effects in the above fields. At the end of this article, the application of machine learning in the field of intraoperative optical imaging technologies is discussed, and the advantages and possible problems of machine-learning methods are analyzed. Furthermore, this article elaborates on possible future development directions of intraoperative optical imaging combined with machine learning, providing a broad view for subsequent research.

Key words

intraoperative optical imaging; machine learning; imaging optimization; intelligent imaging analysis; 3D modeling

0 引言

手术是当今社会延长人类寿命最重要的手段之一,也是人类与病魔抗争史上的一项重大革新。从最初建在房屋顶部的简陋手术室,需要自然光来看清病人的组织结构,到后来的白炽灯,再到无影灯保证手术全程清晰无阴影,手术的变革与光的变换有着密不可分的关系。但如何在术中将病变组织看得更清晰,实现精准切除病变组织无残留,保护重要组织结构避免医源性损伤,一直是关键的挑战性问题,术中光学成像技术的出现为解决该挑战性问题提供了契机。术中光学成像技术是成像技术和临床医学的交叉领域,经过半个世纪的发展,主流术中光学成像技术仍然是应用于微创手术的普通白光腔镜成像,这种成像方式为微创手术提供了重要的成像画面,保障了微创手术的顺利进行。但在白光照射下,手术视野是一片或血红或暗红色的组织,医生主要依靠解剖学知识和触觉、视觉,以及主观经验判断,来区分肿瘤、血管等组织器官结构,目前仍缺乏一种客观判别复杂组织结构的术中实时导航方法(Glatz Dipl-Ing等,2014)。针对这一关键临床问题,学者在术中光学成像领域展开了诸多探索,包括已经广泛应用的白光腔镜,以及后来的窄带腔镜成像(Adler等,2008;Machida等,2004)、激光共聚焦显微成像(Gerger等,2006)、近红外荧光成像(Keereweer等,2013;Kitai等,2005;Schaafsma等,2011;Vahrmeijer等,2013)等,如图 1所示,上述方法均可实时定位病变组织,勾勒病变组织边界,识别肿瘤阳性切缘等。其中术中近红外荧光成像手术导航技术,因其安全无辐射、灵敏度高、简便易操作,且与分子探针结合可以特异性成像肿瘤及重要组织器官和神经脉管,成为手术过程中实时在体识别特定组织的重要工具(Tipirneni等,2017)。目前该项技术已经应用于临床,辅助医生进行术中前哨淋巴结示踪(Soltesz等,2005;Troyan等,2009)、肿瘤检测(Vahrmeijer等,2013)、神经和胆管等重要组织器官成像(Ashitate等,2012;Hyun等,2015)等。基于激发荧光成像的多通道光学成像技术(Troyan等,2009),以及近红外二区的荧光成像技术(Hu等,2020;Zhang等,2018)也得到了相应的发展。

图 1 术中光学成像技术(Gerger等,2006;Gotoh等,2016;Machida等,2004)
Fig. 1 Intraoperative optical imaging technology (Gerger et al., 2006; Gotoh et al., 2016; Machida et al., 2004)

1 机器学习在术中光学成像中的应用

在人工智能领域,机器学习方法的发展为各行各业带来新的契机和挑战。其旨在让机器学会自主分析和处理数据的功能,从而可以解放更多的人力,精准快捷地完成任务并做出决策。传统的机器学习算法大多基于特征的提取,来进行分类决策等,但随互联网的兴起,数据量得到了巨大的积累,逐渐发展出区别于传统算法的新型深度学习方法,其机理在于通过建立网络模型,自主学习海量数据的信息,并分析数据内部的规律与联系,构建起知识表征模型,来辅助做出基于对象的优化和决策。同样,针对个别领域缺少数据集的问题,又衍生出无监督学习策略,旨在让机器自己生成数据或者自我学习。深度学习方法的兴起为越来越多的工程项目和研究领域带来优秀的研究成果。而人工智能的加入对术中光学成像领域的进步和发展也起到了辅助作用,具体表现在优化术中光学成像质量、辅助光学成像分析,以及促进光学成像3维建模等方面。
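机器学习"通过网络训练自主建立经验模型"的基本机理,可以通过下面的极简示例体会:用梯度下降训练一个单神经元逻辑回归模型,使其从数据中自主学到 y = (x1 + x2 > 1) 这一规律。该示例为笔者为说明原理而构造的假设性玩具代码,与文中任何具体方法无关。

```python
import numpy as np

# 假设性玩具示例:单神经元逻辑回归,用梯度下降从数据中学习
# y = (x1 + x2 > 1) 这一线性可分规律
rng = np.random.default_rng(0)
X = rng.random((200, 2))                     # 200 个二维样本
y = (X.sum(axis=1) > 1.0).astype(float)      # 自动生成的标签

w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid 输出(预测概率)
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # 交叉熵损失对 w 的梯度下降
    b -= 0.5 * (p - y).mean()                # 交叉熵损失对 b 的梯度下降

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = ((p > 0.5) == (y == 1.0)).mean()
print(f"训练准确率: {acc:.2f}")
```

真实的深度学习模型由大量此类单元堆叠而成,并借助自动微分框架完成训练,但"以梯度为信号、从数据中拟合经验模型"的机理是一致的。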

1.1 机器学习优化术中光学成像

机器学习优化术中光学成像,主要包括基于术中光学影像,实现对特定组织的目标检测、对重要组织器官的实时目标跟踪,以及对成像效果的提升。目标检测方面,Prokopetc等人(2015)提出了一种基于单目腹腔镜图像,自动检测子宫和输卵管连接处的方法,主要应用于术前子宫影像与腹腔镜影像的自动配准融合,用于影像引导手术,结果表明使用上下文约束是实现高质量检测的基础。目标跟踪方面,Selka等人(2015)提出了一种基于上下文的内窥镜图像递归特征跟踪算法,实现在微创手术中,对可变形组织的跟踪。成像优化方面,Chen等人(2020)提出了生成协作去雾网络(desmoke generative cooperative networks,De-smokeGCN),用于去除术中腔镜成像时产生的手术烟雾,从而优化成像画面,降低术中治疗时的烟雾干扰,采用自主生成仿真图像的方法用于网络训练,以及使用像素级烟雾检测网络和烟雾去除网络协同训练的方法,达到术中腔镜烟雾去除的目的,如图 2所示,即为De-smokeGCN方法在临床腔镜的应用实验,以及与其他去雾算法的对比。图 2从左至右纵列分别表示不同的术中烟雾图像示例:I light是轻度烟雾图像,I middle是中度烟雾图像,I fade是渐变烟雾图像,I irregular是不规则烟雾图像,I heavy是重度烟雾图像。图 2中从上至下横向代表不同的处理算法,比较方法从上至下依次为:原图像、暗通道先验算法(dark channel prior,DCP)、边界约束和上下文正则化算法(boundary constraint and contextual regularization,BCCR)、基于融合的变分图像去雾算法(fusion-based variational image dehazing,FVID)、环境光自动恢复算法(automatic recovery of atmospheric light,ATM)、颜色衰减先验算法(color attenuation prior,CAP)、基于密度评估的去雾算法(density of fog assessment based defogger,DEFADE)、增强型变分图像除雾算法(enhanced variational image dehazing,EVID)、非局部图像除雾(non-local image dehazing,NLD)、图形模型和贝叶斯推断算法(graphical models and Bayesian inference,GMBI)、多合一除雾网络(all-in-one dehazing network,AOD-NET)、像素级条件对抗网络(pixel-to-pixel translation with conditional adversarial networks,PIX2PIX)、只有生成模块的De-smokeGCN网络、De-smokeGCN网络和烟雾评估图。此外,基于近红外激发荧光的术中光学成像技术常常受限于光学组织散射和光信号缺失等,使得荧光成像效果边界不清、模糊成像,产生分辨率低的低质成像问题。针对这一问题,Zhang等人(2019)使用基于生成对抗网络的机器学习方法,来实现荧光图像增强,锐化边缘,提升感知分辨率,优化成像质量。该网络通过学习大量的自然图像从低分辨率到高分辨率的变化过程,等比例优化图像;使用全梯度损失函数约束训练过程,减少假纹理的生成,并用在临床乳腺癌淋巴管荧光成像示踪上(如图 3所示),实现了术中荧光图像的分辨率增强。

图 2 深度学习用于术中腔镜去雾(Chen等,2020)
Fig. 2 Deep learning for intraoperative endoscopic de-smoking(Chen et al., 2020)
图 3 深度学习用于乳腺癌术中荧光成像分辨率增强(Zhang等,2019)
Fig. 3 Deep learning for resolution enhancement of fluorescent imaging during breast cancer surgery (Zhang et al., 2019)
((a)original image; (b)processed image)
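上文提到的全梯度损失,其核心思想是约束生成图像与目标图像在梯度(边缘)上的一致性,从而抑制假纹理。下面的代码仅是基于这一原理的假设性示意,并非 Zhang 等人(2019)的原始实现:

```python
import numpy as np

def total_gradient_loss(pred, target):
    """比较两幅图像在水平、垂直方向梯度上的 L1 差异。
    (基于全梯度损失原理的假设性示意,非原文官方实现。)"""
    # 水平方向(沿列)与垂直方向(沿行)的梯度差
    gx = np.abs(np.diff(pred, axis=1) - np.diff(target, axis=1))
    gy = np.abs(np.diff(pred, axis=0) - np.diff(target, axis=0))
    return gx.mean() + gy.mean()

# 两幅图像完全一致时梯度损失为 0;边缘结构不一致时损失增大
img = np.random.rand(8, 8)
print(total_gradient_loss(img, img))                         # 0.0
print(total_gradient_loss(np.zeros((4, 4)), np.eye(4)) > 0)  # True
```

训练时可将该项与对抗损失加权求和,使生成器在提升分辨率的同时保持真实的边缘结构。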

1.2 机器学习辅助术中光学成像分析

机器学习辅助医生进行术中光学成像分析,包括病变类型和病变等级的分类、区分良恶性组织等。因外科医生对于组织病理学上的识别具有个体差异性,深度学习方法可以辅助临床医生做出客观性判断,如Li等人(2018)提出将深度学习应用于神经外科肿瘤手术中,使用基于激光共聚焦显微内镜获取的术中成像数据和深度学习算法,实现对肿瘤组织的分类识别。此外,Fei等人(2017)提出使用无标签的高光谱成像分析方法,用于肿瘤的边缘评估,并在癌症患者的手术标本上开展初步研究,证明了该方法的可行性,有望用于临床手术中。Halicek等人(2017)同样使用高光谱成像技术,研发了一种基于卷积神经网络(convolutional neural networks,CNN)的分类器,可以实现对鳞状细胞癌、甲状腺癌和正常头颈部组织进行分类,50例患者的初步临床结果表明,高光谱成像结合深度学习,在头颈部手术的组织自动识别和标记方面具有应用潜力。Aubreville等人(2017)使用7 894幅口腔鳞状细胞癌患者的共聚焦激光显微内镜图像对CNN网络模型进行训练,训练结果表明该方法可以实现对恶性癌灶的识别,准确率为0.88。此外,还有研究团队训练了CNN网络,用于结肠镜临床检查中,来检测结直肠息肉,该项工作中训练集和验证集共包含8 641幅图像,一半为包含各种大小和形态的息肉,另一半不包含息肉,实验结果表明,基于深度学习的分类方法可以达到91%的分类准确度,曲线下面积评估(area under the curve,AUC)达到0.96,且算法具有较快的处理速度,有望用于实时成像视频中,实现对息肉的实时识别(Karnes等,2017)。诸如此类的借助深度学习的分类和分析方法,可以显著提高医生的检出速度,伴随分析算法和技术的进一步发展,有望用于更复杂的手术过程中,进行实时病灶识别,辅助医生更好地做出临床决策。
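上文提到的 AUC 指标可以理解为:随机抽取一对正、负样本时,分类器给正样本打出更高分数的概率。下面给出一个示意性实现(笔者构造的假设性示例,并非文中各项工作的原始代码):

```python
import numpy as np

def auc_score(labels, scores):
    """AUC 的两两比较式计算:统计正样本得分高于负样本的
    样本对比例,得分相同计 0.5。(仅为解释评估指标的假设性示例。)"""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]                 # 正样本(如含息肉)的得分
    neg = scores[labels == 0]                 # 负样本的得分
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# 示例:6 幅图像的模拟打分,标签 1 表示含息肉
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
print(auc_score(y, s))  # 8/9 ≈ 0.889
```

与准确率不同,AUC 不依赖于具体的判决阈值,因此更适合比较输出连续打分的病灶分类器。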

1.3 机器学习辅助术中光学成像建模

在人工智能辅助术中光学成像建模方面,通过增加术前计算机断层成像(computed tomography,CT)、核磁共振成像(magnetic resonance imaging,MRI)等先验知识,定位病变区域,实现3维表面建模或者3维整体建模,从而更直观地辅助临床医生对手术视野,以及视野盲区进行判断和决策。在成像引导的手术中,特别是未来可能出现的基于增强现实(augmented reality,AR)的应用中,实现软组织变形的实时精确重建和可视化是至关重要的,Tonutti等人(2017)提出了一种结合预计算有限元法的机器学习算法,推导出脑病理患者特异性变形模型。该模型可以实现实时计算,图 4为重建结果,其精度可与传统的有限元模型相媲美(Tonutti等,2017)。Lorente等人(2017)提出了一种利用机器学习实时模拟人类肝脏在呼吸过程中的生物力学行为的模型,并研究对比了不同的机器学习回归模型,包括基于决策树、随机森林和极度随机树的方法,以及另外两种简单的回归技术,包括虚拟模型和线性回归,所取得的成果为将来开发能够模拟临床干预呼吸过程中人体肝脏变形的实时软件奠定了基础。Bhandarkar等人(2007)对计算机视觉引导下的虚拟颅面重建领域展开了研究,提出了混合数据对齐刚性约束穷极搜索—迭代最近点(data aligned rigidity constrained exhaustive search-iterative closest point,DARCES-ICP)算法,该混合算法可以实现更高的下颌重建精度。Kiraly等人(2004)将机器学习方法应用于多检测器计算机断层扫描(multi-detector computed tomography,MDCT)结合支气管镜的肺癌评估中,提供了一种快速、鲁棒的虚拟支气管镜3维路径规划方法,使用一组人类MDCT图像,将该方法与之前提出的路径规划方法进行比较,证实了该方法的有效性。基于CNN的深度学习模型,可以识别或者跟踪腹腔镜手术中正在使用的手术器械(抓钩、钩子和剪刀等),以及正在进行的手术动作如钝性解剖、切割和缝合等(Choi等,2017;Pakhomov等,2017;Petscharnig和Schöffmann,2018),这些为机器人手术设备的研发和发展提供了技术基础。

图 4 深度学习用于软组织重建(Tonutti等,2017)
Fig. 4 Deep learning for soft tissue reconstruction (Tonutti et al., 2017)
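文中提到的 DARCES-ICP 混合算法以经典的迭代最近点(ICP)配准为基础。ICP 的单次迭代包含最近邻匹配和刚体变换求解两步,可用如下简化代码示意(仅为说明 ICP 原理的假设性示例,点的坐标为任意构造):

```python
import numpy as np

def icp_step(src, dst):
    """ICP 单步示意:最近邻匹配 + SVD(Kabsch 方法)求解刚体变换。
    (说明原理的简化假设性示例,并非 DARCES-ICP 的原始实现。)"""
    # 为源点云中每个点寻找目标点云中的最近点
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # 去中心化后用 SVD 求最优旋转
    src_c = src - src.mean(0)
    dst_c = matched - matched.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # 修正反射,保证 R 为旋转矩阵
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = matched.mean(0) - src.mean(0) @ R.T
    return src @ R.T + t

# 示例:将发生小幅平移的点云对齐回原位置
pts = np.array([[0., 0, 0], [1, 0, 0], [0, 2, 0],
                [0, 0, 3], [1, 2, 0], [1, 0, 3]])
moved = pts + np.array([0.1, -0.05, 0.08])
aligned = icp_step(moved, pts)
print(np.abs(aligned - pts).max())  # 应接近 0
```

实际配准需多次迭代直至收敛,且对初值敏感;DARCES 类穷举搜索正是用来提供可靠初值,再交由 ICP 精化。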

2 结语

术中光学成像技术为临床手术带来诸多便利和优势,在术中成像技术中融入机器学习的方法,可以进一步在成像优化、成像分析和成像建模等方面取得技术的进步和提升。可以预见的是,机器学习方法与术中光学成像技术的结合,可以更自动地检测感兴趣区域,提高成像信背比(信号与背景的比值)和成像质量,更智能地分析术中影像,从而为外科医生提供更多的技术支持。伴随AR技术在手术领域中的应用,机器学习算法可以辅助自动或半自动手术机器人,精准地跟踪定位手术器械,正确地执行手术任务,并可以对即将发生的动作进行预判,降低手术器械对关键血管和重要组织结构的伤害风险,从而增强手术的安全性。但基于深度学习的术中光学成像领域仍然处于初步研发阶段,诸多成果仍处于预临床或临床小样本阶段,一个主要的问题在于应用环境复杂且深度学习方法的泛化能力较弱。伴随技术的不断更新换代,以及对应用领域的深入探索,这些问题有望得到更多的解决办法,相信深度学习结合术中光学成像会得到更多更好的发展。

参考文献

  • Adler A, Pohl H, Papanikolaou I S, Abou-Rebyeh H, Schachschal G, Veltzke-Schlieker W, Khalifa A C, Setka E, Koch M, Wiedenmann B and Rösch T. 2008. A prospective randomised study on narrow-band imaging versus conventional colonoscopy for adenoma detection: does narrow-band imaging induce a learning effect? Gut, 57(1): 59-64[DOI: 10.1136/gut.2007.123539]
  • Ashitate Y, Stockdale A, Choi H S, Laurence R G, Frangioni J V. 2012. Real-time simultaneous near-infrared fluorescence imaging of bile duct and arterial anatomy. Journal of Surgical Research, 176(1): 7-13 [DOI:10.1016/j.jss.2011.06.027]
  • Aubreville M, Knipfer C, Oetter N, Jaremenko C, Rodner E, Denzler J, Bohr C, Neumann H, Stelzle F, Maier A. 2017. Automatic classification of cancerous tissue in laserendomicroscopy images of the oral cavity using deep learning. Scientific Reports, 7(1): #11979 [DOI:10.1038/s41598-017-12320-8]
  • Bhandarkar S M, Chowdhury A S, Tang Y R, Yu J C, Tollner E W. 2007. Computer vision guided virtual craniofacial reconstruction. Computerized Medical Imaging and Graphics, 31(6): 418-427 [DOI:10.1016/j.compmedimag.2007.03.003]
  • Chen L, Tang W, John N W, Wan T R, Zhang J J. 2020. De-smokeGCN:generative cooperative networks for joint surgical smoke detection and removal. IEEE Transactions on Medical Imaging, 39(5): 1615-1625 [DOI:10.1109/TMI.2019.2953717]
  • Choi B, Jo K, Choi S and Choi J. 2017. Surgical-tools detection based on Convolutional Neural Network in laparoscopic robot-assisted surgery//Proceedings of the 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Seogwipo: IEEE: #8037183[DOI: 10.1109/embc.2017.8037183]
  • Fei B W, Lu G L, Wang X, Zhang H Z, Little J V, Patel M R, Griffith C C, El-Diery M W, Chen A Y. 2017. Label-free reflectance hyperspectral imaging for tumor margin assessment:a pilot study on surgical specimens of cancer patients. Journal of Biomedical Optics, 22(8): 1-7 [DOI:10.1117/1.JBO.22.8.086009]
  • Gerger A, Koller S, Weger W, Richtig E, Kerl H, Samonigg H, Krippl P, Smolle J. 2006. Sensitivity and specificity of confocal laser-scanning microscopy for in vivo diagnosis of malignant skin tumors. Cancer, 107(1): 193-200 [DOI:10.1002/cncr.21910]
  • Glatz Dipl-Ing J, Garcia-Allende P B, Becker V, Koch M, Meining A, Ntziachristos V. 2014. Near-infrared fluorescence cholangiopancreatoscopy:initial clinical feasibility results. Gastrointestinal Endoscopy, 79(4): 664-668 [DOI:10.1016/j.gie.2013.10.008]
  • Gotoh K, Kobayashi S, Marubashi S, Yamada T, Akita H, Takahashi H, Yano M, Ishikawa O and Sakon M. 2016. Intraoperative detection of hepatocellular carcinoma using indocyanine green fluorescence imaging//Kusano M, Kokudo N, Toi M and Kaibori M, eds. ICG Fluorescence Imaging and Navigation Surgery. Tokyo: Springer: 325-334[DOI: 10.1007/978-4-431-55528-5_29]
  • Halicek M, Lu G L, Little J V, Wang X, Patel M, Griffith C C, El-Deiry M, Chen A Y, Fei B W. 2017. Deep convolutional neural networks for classifying head and neck cancer using hyperspectral imaging. Journal of Biomedical Optics, 22(6): #060503 [DOI:10.1117/1.JBO.22.6.060503]
  • Hu Z H, Fang C, Li B, Zhang Z Y, Cao C G, Cai M S, Su S, Sun X W, Shi X J, Li C, Zhou T J, Zhang Y X, Chi C W, He P, Xia X M, Chen Y, Gambhir S S, Cheng Z, Tian J. 2020. First-in-human liver-tumour surgery guided by multispectral fluorescence imaging in the visible and near-infrared-I/II windows. Nature Biomedical Engineering, 4(3): 259-271 [DOI:10.1038/s41551-019-0494-0]
  • Hyun H, Park M H, Owens E A, Wada H, Henary M, Handgraaf H J M, Vahrmeijer A L, Frangioni J V, Choi H S. 2015. Structure-inherent targeting of near-infrared fluorophores for parathyroid and thyroid gland imaging. Nature Medicine, 21(2): 192-197 [DOI:10.1038/nm.3728]
  • Karnes W E, Alkayali T, Mittal M, Patel A, Kim J, Chang K J, Ninh A Q, Urban G, Baldi P. 2017. Su1642 automated polyp detection using deep learning:leveling the field. Gastrointestinal Endoscopy, 85(5): AB376-AB377 [DOI:10.1016/j.gie.2017.03.871]
  • Keereweer S, van Driel P B A A, Snoeks T J A, Kerrebijn J D F, Baatenburg de Jong R J, Vahrmeijer A L, Sterenborg H J C M, Löwik C W G M. 2013. Optical image-guided cancer surgery:challenges and limitations. Clinical Cancer Research, 19(14): 3745-3754 [DOI:10.1158/1078-0432.CCR-12-3598]
  • Kiraly A P, Helferty J P, Hoffman E A, McLennan G, Higgins W E. 2004. Three-dimensional path planning for virtual bronchoscopy. IEEE Transactions on Medical Imaging, 23(11): 1365-1379 [DOI:10.1109/TMI.2004.829332]
  • Kitai T, Inomoto T, Miwa M, Shikayama T. 2005. Fluorescence navigation with indocyanine green for detecting sentinel lymph nodes in breast cancer. Breast Cancer, 12(3): 211-215 [DOI:10.2325/jbcs.12.211]
  • Li Y C, Charalampaki P, Liu Y, Yang G Z, Giannarou S. 2018. Context aware decision support in neurosurgical oncology based on an efficient classification of endomicroscopic data. International Journal of Computer Assisted Radiology and Surgery, 13(8): 1187-1199 [DOI:10.1007/s11548-018-1806-7]
  • Lorente D, Martínez-Martínez F, Rupérez M J, Lago M A, Martínez-Sober M, Escandell-Montero P, Martínez-Martínez J M, Martínez-Sanchis S, Serrano-López A J, Monserrat C, Martín-Guerrero J D. 2017. A framework for modelling the biomechanical behaviour of the human liver during breathing in real time using machine learning. Expert Systems with Applications, 71: 342-357 [DOI:10.1016/j.eswa.2016.11.037]
  • Machida H, Sano Y, Hamamoto Y, Muto M, Kozu T, Tajiri H, Yoshida S. 2004. Narrow-band imaging in the diagnosis of colorectal mucosal lesions:a pilot study. Endoscopy, 36(12): 1094-1098 [DOI:10.1055/s-2004-826040]
  • Pakhomov D, Premachandran V, Allan M, Azizian M and Navab N. 2017. Deep residual learning for instrument segmentation in robotic surgery[EB/OL].[2020-05-11]. https://arxiv.org/pdf/1703.08580.pdf
  • Petscharnig S, Schöffmann K. 2018. Learning laparoscopic video shot classification for gynecological surgery. Multimedia Tools and Applications, 77(7): 8061-8079 [DOI:10.1007/s11042-017-4699-5]
  • Prokopetc K, Collins T and Bartoli A. 2015. Automatic detection of the uterus and fallopian tube junctions in laparoscopic images//Proceedings of the 24th International Conference on Information Processing in Medical Imaging. Sabhal Mor Ostaig: Springer: 552-563[DOI: 10.1007/978-3-319-19992-4_43]
  • Schaafsma B E, Mieog J S D, Hutteman M, van der Vorst J R, Kuppen P J K, Löwik C W G M, Frangioni J V, van de Velde C J H, Vahrmeijer A L. 2011. The clinical use of indocyanine green as a near-infrared fluorescent contrast agent for image-guided oncologic surgery. Journal of Surgical Oncology, 104(3): 323-332 [DOI:10.1002/jso.21943]
  • Selka F, Nicolau S, Agnus V, Bessaid A, Marescaux J, Soler L. 2015. Context-specific selection of algorithms for recursive feature tracking in endoscopic image using a new methodology. Computerized Medical Imaging and Graphics, 40: 49-61 [DOI:10.1016/j.compmedimag.2014.11.012]
  • Soltesz E G, Kim S, Laurence R G, DeGrand A M, Parungo C P, Dor D M, Cohn L H, Bawendi M G, Frangioni J V, Mihaljevic T. 2005. Intraoperative sentinel lymph node mapping of the lung using near-infrared fluorescent quantum dots. The Annals of Thoracic Surgery, 79(1): 269-277 [DOI:10.1016/j.athoracsur.2004.06.055]
  • Tipirneni K E, Warram J M, Moore L S, Prince A C, De Boer E, Jani A, Wapnir I L, Liao J C, Bouvet M, Behnke N K, Hawn M T, Poultsides G A, Vahrmeijer A L, Carroll W R, Zinn K R, Rosenthal E. 2017. Oncologic procedures amenable to fluorescence-guided surgery. Annals of Surgery, 266(1): 36-47 [DOI:10.1097/SLA.0000000000002127]
  • Tonutti M, Gras G, Yang G Z. 2017. A machine learning approach for real-time modelling of tissue deformation in image-guided neurosurgery. Artificial Intelligence in Medicine, 80: 39-47 [DOI:10.1016/j.artmed.2017.07.004]
  • Troyan S L, Kianzad V, Gibbs-Strauss S L, Gioux S, Matsui A, Oketokoun R, Ngo L, Khamene A, Azar F, Frangioni J V. 2009. The FLARE™ intraoperative near-infrared fluorescence imaging system:a first-in-human clinical trial in breast cancer sentinel lymph node mapping. Annals of Surgical Oncology, 16(10): 2943-2952 [DOI:10.1245/s10434-009-0594-2]
  • Vahrmeijer A L, Hutteman M, van der Vorst J R, van de Velde C J H, Frangioni J V. 2013. Image-guided cancer surgery using near-infrared fluorescence. Nature Reviews Clinical Oncology, 10(9): 507-518 [DOI:10.1038/nrclinonc.2013.123]
  • Zhang C, Wang K, An Y, He K S, Tong T, Tian J. 2019. Improved generative adversarial networks using the total gradient loss for the resolution enhancement of fluorescence images. Biomedical Optics Express, 10(9): 4742-4756 [DOI:10.1364/BOE.10.004742]
  • Zhang M X, Yue J Y, Cui R, Ma Z R, Wan H, Wang F F, Zhu S J, Zhou Y, Kuang Y, Zhong Y T, Pang D W, Dai H J. 2018. Bright quantum dots emitting at~1 600 nm in the NIR-IIb window for deep tissue fluorescence imaging. Proceedings of the National Academy of Sciences of the United States of America, 115(26): 6590-6595 [DOI:10.1073/pnas.1806153115]