机器学习在术中光学成像技术中的应用研究
Review: the application of machine learning in intraoperative optical imaging technologies
2020, Vol. 25, No. 10, pp. 1994-2001
Received: 2020-06-11; Revised: 2020-07-13; Accepted: 2020-07-20; Published in print: 2020-10-16
DOI: 10.11834/jig.200291

The rise of intraoperative optical imaging technologies has provided more convenient and intuitive observation means for clinical surgery. Traditional intraoperative optical imaging methods include open optical imaging and intraoperative laparoscopic and endoscopic imaging; these methods have ensured the smooth conduct of clinical surgery and promoted the development of minimally invasive surgery. Subsequently developed intraoperative optical imaging technologies include narrow-band endoscopic imaging, intraoperative laser confocal microscopy, and near-infrared excited fluorescence imaging. Intraoperative optical imaging technologies can assist doctors in accurately locating tumors, rapidly distinguishing benign from malignant tissue, and detecting small lesions, and they have shown good results in many clinical applications. However, they also suffer from limited imaging quality, a lack of powerful imaging analysis tools, and the restriction of imaging only superficial tissue. The addition of machine learning is expected to break through these bottlenecks and further advance intraoperative optical imaging. Focusing on intraoperative optical imaging technologies, this article surveys the application of machine learning in this field, specifically: machine learning for optimizing intraoperative optical imaging quality, for assisting intelligent analysis of intraoperative optical imaging, and for assisting three-dimensional modeling based on intraoperative optical images. The article summarizes and analyzes the applications of machine learning in intraoperative optical imaging, with particular attention to the prospects of deep learning methods in this field, providing broader ideas for subsequent research.
The rise of intraoperative optical imaging technologies provides convenient and intuitive observation methods for clinical surgery. Traditional intraoperative optical imaging methods include open optical imaging and intraoperative endoscopic imaging. These methods ensure the smooth implementation of clinical surgery and promote the development of minimally invasive surgery. Subsequent methods include narrow-band endoscopic imaging, intraoperative laser confocal microscopy, and near-infrared excited fluorescence imaging. Narrow-band endoscopic imaging uses a filter to remove the broadband spectrum emitted by the endoscope light source, leaving only narrow spectral bands for the diagnosis of various diseases of the digestive tract. The narrow-band spectrum enhances the imaging of gastrointestinal mucosal vessels, and for lesions with microvascular changes, the narrow-band imaging system has evident advantages over ordinary endoscopy in distinguishing lesions. Beyond the digestive tract, narrow-band endoscopic imaging has also been widely used in otolaryngology, the respiratory tract, gynecological endoscopy, and laparoscopic surgery. Intraoperative laser confocal microscopy is a newer imaging method: it can image superficial tissue in vivo and provide pathological information by using the principle of excited fluorescence imaging. Owing to confocal imaging, this method offers high clarity and can be used for lesion localization. Near-infrared excited fluorescence imaging combines excitation fluorescence imaging equipment with corresponding fluorescent contrast agents (such as ICG (indocyanine green) and methylene blue) to achieve intraoperative specific imaging of lesions, tissues, and organs in vivo. The basic principle is to excite the contrast agent accumulated in the tissue so that it emits a fluorescent signal; real-time imaging is then realized by collecting these signals. In clinical research, near-infrared fluorescence imaging is often used for lymphatic vessel tracing and accurate tumor resection. Because contrast agents have different imaging spectral bands, the corresponding near-infrared fluorescence imaging equipment is evolving toward multichannel imaging modes that can image several contrast agents and specifically label multiple tissues in the same field of view during surgery. Multichannel near-infrared fluorescent surgical navigation equipment under development can realize simultaneous fluorescence imaging of multiple organs and tissues. These intraoperative optical imaging technologies can assist doctors in accurately locating tumors, rapidly distinguishing between benign and malignant tissues, and detecting small lesions, and they have shown good results in many clinical applications.

However, optical imaging is susceptible to interference from ambient light, and optical signals cannot propagate through tissue without absorption and scattering. Intraoperative optical imaging technologies therefore suffer from limited imaging quality and can only image superficial tissue. In clinical research, intelligent analysis of preoperative imaging is developing rapidly, while information analysis of intraoperative imaging still lacks powerful analytical tools and methods; effective intraoperative optical imaging analysis algorithms require further exploration. Machine learning, a tool that has developed with the age of computer information technology, is expected to provide an effective solution to these problems. With the accumulation and explosion of data, deep learning, a type of machine learning, has emerged as an end-to-end approach: it can learn the internal relationships among things autonomously through network training, establish an empirical model, and realize the functions of traditional algorithms. Deep learning has shown strong results in the analysis and processing of natural images and is being continuously promoted and applied in various fields. Machine learning provides powerful technical means for intelligent analysis, image processing, and three-dimensional reconstruction, but application research on machine learning in intraoperative optical imaging remains relatively scarce. The addition of machine learning is expected to break through the bottleneck and promote the development of intraoperative optical imaging technologies.

This article focuses on intraoperative optical imaging technologies and surveys the application of machine learning in this field in recent years, including optimizing intraoperative optical imaging quality, assisting intelligent analysis of intraoperative optical imaging, and promoting three-dimensional modeling based on intraoperative optical imaging. In the field of machine learning for intraoperative optical imaging optimization, existing research includes target detection of specific tissues, such as soft tissue segmentation and image fusion, and optimization of imaging effects, such as resolution enhancement of intraoperative near-infrared fluorescence imaging and intraoperative endoscopic smoke removal. Furthermore, machine learning assists doctors in performing intraoperative optical imaging analysis, including the identification of benign and malignant tissues and the classification of lesion types and grades; it can thus provide timely reference for the surgeon to judge the state of the patient during the operation and before pathological examination. In the field of intraoperative optical imaging reconstruction, machine learning can be combined with preoperative images (such as computed tomography and magnetic resonance imaging) to assist intraoperative soft tissue reconstruction, or it can perform three-dimensional reconstruction directly from intraoperative images. It can be used for localization, three-dimensional organ morphology reconstruction, and tracking of intraoperative tissues and surgical instruments. Machine learning is thus expected to provide the technical foundation for robotic surgery and augmented-reality surgery in the future.

This article summarizes and analyzes the application of machine learning in the field of intraoperative optical imaging and describes the application prospects of deep learning. As a review, it surveys application research on machine learning in intraoperative optical imaging from three aspects: intraoperative optical image optimization, intelligent analysis of optical imaging, and three-dimensional reconstruction, and it introduces related research and expected effects in each. Finally, the application of machine learning in intraoperative optical imaging technologies is discussed, the advantages and possible problems of machine-learning methods are analyzed, and possible future development directions of intraoperative optical imaging combined with machine learning are elaborated, providing a broad view for subsequent research.
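The narrow-band filtering principle described above can be illustrated with a minimal numerical sketch. This is not from the paper; it is a toy model assuming an idealized flat white light source and Gaussian band-pass filters centered near 415 nm and 540 nm, the bands commonly used in narrow-band imaging because hemoglobin absorbs strongly there (which is why mucosal vessels appear enhanced).

```python
import numpy as np

def bandpass(wavelengths, center, width):
    """Gaussian band-pass transmission curve (hypothetical filter model)."""
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

wl = np.arange(380, 701)                   # visible range, 1 nm steps
broadband = np.ones_like(wl, dtype=float)  # idealized flat white source

# Combined transmission of the two narrow-band filters
transmission = bandpass(wl, 415, 10) + bandpass(wl, 540, 10)
narrowband = broadband * transmission

# Nearly all transmitted energy now sits near the two filter centers,
# while mid-spectrum wavelengths (e.g. 470 nm) are suppressed.
peak_wl = wl[np.argmax(narrowband)]
print(peak_wl)  # → 415
```

The width and center values here are illustrative assumptions; a real system's filter curves depend on the endoscope hardware.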
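The "end-to-end" learning idea mentioned above can also be sketched concretely. The following is a minimal illustration, not the method of any work surveyed here: a tiny fully connected network trained with plain gradient descent on synthetic two-feature "tissue patch" data (a hypothetical stand-in for, e.g., fluorescence intensity statistics), learning the input-to-label mapping directly from examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 2 features per sample; label 1 when their sum
# exceeds a threshold (an artificial, linearly separable rule).
X = rng.uniform(0, 1, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer (2 -> 8 -> 1), trained end to end.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)        # hidden activations
    p = sigmoid(h @ W2 + b2)        # predicted probability
    grad_out = (p - y) / len(X)     # gradient of mean cross-entropy
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)  # backprop through tanh
    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * (X.T @ grad_h);   b1 -= lr * grad_h.sum(axis=0)

# Evaluate on the training data after the final update
h = np.tanh(X @ W1 + b1)
p = sigmoid(h @ W2 + b2)
accuracy = ((p > 0.5) == (y > 0.5)).mean()
print(round(accuracy, 2))
```

Real intraoperative applications would of course use far larger convolutional networks and clinical images; this sketch only shows how network training replaces hand-crafted rules with a model learned from data.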