Pre-analysis of difficulty in renal tumor enucleation surgery based on deep learning and automated image evaluation
2023, Vol. 28, No. 8, Pages: 2461-2475
Print publication date: 2023-08-16
DOI: 10.11834/jig.220375
Liu Yunpeng, Wu Tielin, Cai Wenli, Wang Renfang, Sun Dechao, Gan Kaifeng, Li Jin, Jin Ran, Qiu Hong, Xu Huixia. 2023. Pre-analysis of difficulty in renal tumor enucleation surgery based on deep learning and automated image evaluation. Journal of Image and Graphics, 28(08): 2461-2475
Objective
Early renal cancer can be treated effectively by renal tumor enucleation. To reduce surgical difficulty and the incidence of complications, the difficulty of the operation needs to be assessed reasonably and effectively. This paper combines deep learning, medical radiomics, and image analysis techniques to propose an automatic method for assessing the difficulty of renal tumor enucleation based on CT (computed tomography) images.
Method
First, a cascaded end-to-end segmentation model is built to segment the kidney, renal tumor, and abdominal wall simultaneously; sub-pixel convolution and an attention mechanism are incorporated to ensure accurate segmentation of small-volume tumors. Then, radiomics features are used to remove misidentified renal tumors. Finally, based on the segmentation results, the internationally standardized Mayo adhesive probability (MAP) score and R.E.N.A.L. score are computed automatically for the kidney and renal tumor, and the difficulty of renal tumor enucleation is derived from these scores.
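The sub-pixel convolution mentioned above up-samples by rearranging channels of a low-resolution feature map into a higher-resolution one rather than by interpolating. A minimal NumPy sketch of that rearrangement (the same depth-to-space ordering used by PyTorch's `PixelShuffle`; not the authors' code):

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r).

    Each group of r*r channels is interleaved spatially, so up-sampling
    preserves the detail learned in the channel dimension.
    """
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into (r, r) sub-pixel offsets
    x = x.transpose(0, 3, 1, 4, 2)    # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# Four 1x1 feature maps become one 2x2 map.
out = pixel_shuffle(np.arange(4.0).reshape(4, 1, 1), r=2)
print(out.tolist())  # [[[0.0, 1.0], [2.0, 3.0]]]
```

In the segmentation network, such a layer follows an ordinary convolution that produces the `C*r*r` channels, replacing transposed convolution in the decoder.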
Result
The automatic evaluation results were compared with those of three medical experts from the urology department of a Class A tertiary hospital. In terms of average prediction results, the method outperformed two of the experts and trailed the best expert by only 0.1%. The average prediction time was about 244 ms per tumor with a standard deviation of only 8 ms, whereas expert evaluation took about 26 s with a standard deviation of roughly 3 s, making automatic evaluation about 108 times faster than manual evaluation.
Conclusion
Overall, the automatic evaluation results are essentially consistent with the experts' level, and the evaluation is faster and more stable. The method can effectively replace experts in automated evaluation, provide accurate and reliable decision support for preoperative diagnosis, individualized surgical planning, and surgical approach selection, and offer an intelligent medical solution for assessing surgical difficulty.
Objective
Early renal cancer can be identified and treated effectively via enucleation of the renal tumor. To reduce surgical difficulty and the risk of complications, the feasibility and difficulty of the operation must be evaluated efficiently and effectively. The Mayo adhesive probability (MAP) score and the R.E.N.A.L. score are the standard instruments for quantifying surgical difficulty, and the corresponding difficulty scores are conventionally estimated manually from computed tomography (CT) images. Such qualitative manual evaluation is time-consuming and labor-intensive, which limits its accuracy and reliability. Combining deep learning with medical radiomics and image analysis, we develop an automatic method for evaluating the difficulty of renal tumor enucleation from CT images.
Method
First, a three-layer cascaded end-to-end segmentation model is built to segment the kidney, renal tumor, and abdominal wall simultaneously. Each layer is an extended U-Net: the abdominal wall is segmented at the top layer, the kidney in the middle, and the renal tumor at the bottom, so the cascade learns the spatial constraints among the three structures. In the extended U-Net, dense connections are added at three levels: within the convolution blocks of the encoder, between encoder and decoder blocks at the same depth, and between upper and lower layers. These dense connections capture richer semantic relations and transmit more information during training, strengthening the overall gradient flow and helping the optimization converge smoothly toward a good solution. To reduce the loss of texture detail during up-sampling, sub-pixel convolution is used: it generates a higher-resolution feature map by reordering the pixels of multiple low-resolution feature maps. An attention mechanism adapted to medical images is also employed to preserve the accuracy of small-volume tumor segmentation. Then, misidentified renal tumors are removed using radiomics features, which are high-dimensional non-invasive image biomarkers well suited to mining, quantifying, and analyzing deep-seated characteristics of malignant tumors that are not recognizable to the naked eye. In this study, seven groups of radiomics features are computed: gray-level co-occurrence matrix (GLCM), square statistics, gradient, moment, run length (RL), boundary, and wavelet features.
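As one concrete example of the radiomics features listed above, a GLCM and its contrast statistic can be computed as follows. This is a minimal sketch for a single pixel offset; function names are illustrative, and real radiomics pipelines aggregate many offsets, gray-level quantizations, and statistics:

```python
import numpy as np

def glcm(img: np.ndarray, levels: int, dr: int = 0, dc: int = 1) -> np.ndarray:
    """Gray-level co-occurrence matrix for one pixel offset (dr, dc)."""
    p = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            p[img[r, c], img[r + dr, c + dc]] += 1  # count co-occurring gray pairs
    return p / p.sum()  # normalize to joint probabilities

def contrast(p: np.ndarray) -> float:
    """GLCM contrast: sum_ij P[i, j] * (i - j)^2 (local intensity variation)."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 0]])
print(contrast(glcm(img, levels=3)))  # 1.0
```

Statistics such as contrast, computed inside the candidate tumor mask, become entries in the feature vector used later to reject false-positive tumors.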
Finally, the internationally standardized MAP score and R.E.N.A.L. score are computed automatically from the segmentation results for the kidney and renal tumor, and the difficulty of renal tumor enucleation is derived from them.
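For illustration, the tumor-size (R) and nearness-to-collecting-system (N) components of the R.E.N.A.L. score can be mapped from segmentation-derived measurements roughly as follows (a sketch using the point thresholds of Kutikov and Uzzo, 2009, as we recall them; the measurement inputs are assumed to come from the masks, and the remaining E, A, and L components are omitted):

```python
def renal_r_points(max_diameter_cm: float) -> int:
    """R component: maximal tumor diameter (<=4 cm: 1; 4-7 cm: 2; >=7 cm: 3)."""
    if max_diameter_cm <= 4.0:
        return 1
    if max_diameter_cm < 7.0:
        return 2
    return 3

def renal_n_points(distance_mm: float) -> int:
    """N component: distance to collecting system or sinus (>=7 mm: 1; 4-7 mm: 2; <=4 mm: 3)."""
    if distance_mm >= 7.0:
        return 1
    if distance_mm > 4.0:
        return 2
    return 3

print(renal_r_points(5.2), renal_n_points(3.0))  # 2 3
```

In the automated pipeline, the diameter and distance would be measured directly on the kidney and tumor masks, so the whole score reduces to lookups of this kind.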
Result
The joint segmentation of the kidney, renal tumor, and abdominal wall is evaluated with the Dice coefficient (DC), positive predictive value (PPV), and sensitivity. The sensitivity, PPV, and Dice of our model are 0.1, 0.08, and 0.09 higher than those of the worst baseline, U-Net++, and 0.04, 0.04, and 0.05 higher than those of the stronger baseline, BlendMask; their highest values reach 0.97, 0.98, and 0.98, respectively. To remove false-positive tumor regions effectively, a binary classification model is adopted. Random forest (RF) is chosen because its average performance across the test samples is the best: its five-fold cross-validation accuracy is 0.95 (±0.03) and its area under the curve (AUC) is 0.99, much higher than those of the other classifiers compared. In the MAP and R.E.N.A.L. scoring experiments, the P, L, and R values are consistent with the evaluations of at least two experts in all five runs; for the N and E values, four of the five runs agree with at least two experts. The automatic scoring of individual items is therefore already very close to that of the experts. The final difficulty evaluations are compared with those of three medical experts from the urology department of a Class A tertiary hospital; on average, the automatic method is essentially consistent with the expert evaluation level.
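The segmentation metrics reported above can be computed from binary masks in a few lines (a standard formulation for illustration, not the authors' code):

```python
import numpy as np

def seg_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Dice, positive predictive value, and sensitivity for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()      # true-positive voxels
    dice = 2 * tp / (pred.sum() + gt.sum())  # overlap measure
    ppv = tp / pred.sum()                    # of predicted voxels, how many are real
    sensitivity = tp / gt.sum()              # of real voxels, how many are found
    return {"dice": dice, "ppv": ppv, "sensitivity": sensitivity}

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 1, 0], [0, 1, 1]])
m = seg_metrics(pred, gt)
print(round(m["dice"], 3), m["ppv"], m["sensitivity"])  # 0.857 1.0 0.75
```

The same formulas apply per class (kidney, tumor, abdominal wall) on the 3D CT volumes.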
Conclusion
The proposed method provides accurate and reliable decision support for precise preoperative diagnosis, individualized planning of the surgical scheme, and selection of the surgical approach. Furthermore, it can be integrated into a medical image cloud platform to provide intelligent medical solutions.
Keywords: enucleation of renal tumor; medical image segmentation; radiomics; deep learning; surgical evaluation
Aerts H J W L, Velazquez E R, Leijenaar R T H, Parmar C, Grossmann P, Carvalho S, Bussink J, Monshouwer R, Haibe-Kains B, Rietveld D, Hoebers F, Rietbergen M M, Leemans C R, Dekker A, Quackenbush J, Gillies R J and Lambin P. 2014. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nature Communications, 5: #4006 [DOI: 10.1038/ncomms5006]
Afshar P, Mohammadi A, Plataniotis K N, Oikonomou A and Benali H. 2019. From handcrafted to deep-learning-based cancer radiomics: challenges and opportunities. IEEE Signal Processing Magazine, 36(4): 132-160 [DOI: 10.1109/MSP.2019.2900993]
Chen H, Sun K Y, Tian Z, Shen C H, Huang Y M and Yan Y L. 2020. BlendMask: top-down meets bottom-up for instance segmentation//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 8570-8578 [DOI: 10.1109/CVPR42600.2020.00860]
Chen L C, Zhu Y K, Papandreou G, Schroff F and Adam H. 2018. Encoder-decoder with atrous separable convolution for semantic image segmentation//Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer: 833-851 [DOI: 10.1007/978-3-030-01234-2_49]
Çiçek Ö, Abdulkadir A, Lienkamp S S, Brox T and Ronneberger O. 2016. 3D U-Net: learning dense volumetric segmentation from sparse annotation//Proceedings of the 19th International Conference on Medical Image Computing and Computer-Assisted Intervention. Athens, Greece: Springer: 424-432 [DOI: 10.1007/978-3-319-46723-8_49]
Davidiuk A J, Parker A S, Thomas C S, Leibovich B C, Castle E P, Heckman M G, Custer K and Thiel D D. 2014. Mayo adhesive probability score: an accurate image-based scoring system to predict adherent perinephric fat in partial nephrectomy. European Urology, 66(6): 1165-1171 [DOI: 10.1016/j.eururo.2014.08.054]
Zabihollahy F, Schieda N, Krishna S and Ukwatta E. 2020. Ensemble U-Net-based method for fully automated detection and segmentation of renal masses on computed tomography images. Medical Physics, 47(9): 4032-4044 [DOI: 10.1002/mp.14193]
Goodfellow I J, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A and Bengio Y. 2014. Generative adversarial nets//Proceedings of the 27th International Conference on Neural Information Processing Systems. Montreal, Canada: MIT Press: 2672-2680
Guo J N, Zeng W, Yu S and Xiao J Q. 2021. RAU-Net: U-Net model based on residual and attention for kidney and kidney tumor segmentation//Proceedings of 2021 IEEE International Conference on Consumer Electronics and Computer Engineering (ICCECE). Guangzhou, China: IEEE: 353-356 [DOI: 10.1109/ICCECE51280.2021.9342530]
He K M, Gkioxari G, Dollár P and Girshick R. 2017. Mask R-CNN//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 2961-2969 [DOI: 10.1109/ICCV.2017.322]
He Y T, Ge R J, Qi X M, Yang G Y, Chen Y, Kong Y Y, Shu H Z, Coatrieux J L and Li S. 2021. EnMcGAN: adversarial ensemble learning for 3D complete renal structures segmentation//Proceedings of the 27th International Conference on Information Processing in Medical Imaging. Copenhagen, Denmark: Springer: 465-477 [DOI: 10.1007/978-3-030-78191-0_36]
Huang G, Liu Z, Van Der Maaten L and Weinberger K Q. 2017. Densely connected convolutional networks//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 2261-2269 [DOI: 10.1109/CVPR.2017.243]
Isola P, Zhu J Y, Zhou T H and Efros A A. 2017. Image-to-image translation with conditional adversarial networks//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 5967-5976 [DOI: 10.1109/CVPR.2017.632]
Kang L, Zhou Z Q, Huang J J and Han W Z. 2022. Renal tumors segmentation in abdomen CT images using 3D-CNN and ConvLSTM. Biomedical Signal Processing and Control, 72: #103334 [DOI: 10.1016/j.bspc.2021.103334]
Kutikov A and Uzzo R G. 2009. The R.E.N.A.L. nephrometry score: a comprehensive standardized system for quantitating renal tumor size, location and depth. Journal of Urology, 182(3): 844-853 [DOI: 10.1016/j.juro.2009.05.035]
Lambin P, Leijenaar R T H, Deist T M, Peerlings J, de Jong E E C, van Timmeren J, Sanduleanu S, Larue R T H M, Even A J G, Jochems A, van Wijk Y, Woodruff H, van Soest J, Lustberg T, Roelofs E, van Elmpt W, Dekker A, Mottaghy F M, Wildberger J E and Walsh S. 2017. Radiomics: the bridge between medical imaging and personalized medicine. Nature Reviews Clinical Oncology, 14(12): 749-762 [DOI: 10.1038/nrclinonc.2017.141]
Milletari F, Navab N and Ahmadi S A. 2016. V-Net: fully convolutional neural networks for volumetric medical image segmentation//Proceedings of the 4th International Conference on 3D Vision. Stanford, USA: IEEE: 565-571 [DOI: 10.1109/3DV.2016.79]
Oktay O, Schlemper J, Le Folgoc L, Lee M, Heinrich M, Misawa K, Mori K, McDonagh S, Hammerla N Y, Kainz B, Glocker B and Rueckert D. 2018. Attention U-Net: learning where to look for the pancreas [EB/OL]. [2022-02-01]. https://arxiv.org/pdf/1804.03999.pdf
Parekh V S and Jacobs M A. 2019. Radiomic synthesis using deep convolutional neural networks//Proceedings of the 16th IEEE International Symposium on Biomedical Imaging. Venice, Italy: IEEE: 1114-1117 [DOI: 10.1109/ISBI.2019.8759491]
Qin T X, Wang Z Y, He K L, Shi Y H, Gao Y and Shen D G. 2020. Automatic data augmentation via deep reinforcement learning for effective kidney tumor segmentation//Proceedings of 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Barcelona, Spain: IEEE: 1419-1423 [DOI: 10.1109/ICASSP40776.2020.9053403]
Rezaei M, Yang H J and Meinel C. 2020. Recurrent generative adversarial network for learning imbalanced medical image semantic segmentation. Multimedia Tools and Applications, 79(21): 15329-15348 [DOI: 10.1007/s11042-019-7305-1]
Ronneberger O, Fischer P and Brox T. 2015. U-Net: convolutional networks for biomedical image segmentation//Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich, Germany: Springer: 234-241 [DOI: 10.1007/978-3-319-24574-4_28]
Ruan Y N, Li D W, Marshall H, Miao T, Cossetto T, Chan I, Daher O, Accorsi F, Goela A and Li S. 2020a. MB-FSGAN: joint segmentation and quantification of kidney tumor on CT by the multi-branch feature sharing generative adversarial network. Medical Image Analysis, 64: #101721 [DOI: 10.1016/j.media.2020.101721]
Ruan Y N, Li D W, Marshall H, Miao T, Cossetto T, Chan I, Daher O, Accorsi F, Goela A and Li S. 2020b. Mt-UcGAN: multi-task uncertainty-constrained GAN for joint segmentation, quantification and uncertainty estimation of renal tumors on CT//Proceedings of the 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention. Lima, Peru: Springer: 439-449 [DOI: 10.1007/978-3-030-59719-1_43]
Shelhamer E, Long J and Darrell T. 2017. Fully convolutional networks for semantic segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(4): 640-651 [DOI: 10.1109/TPAMI.2016.2572683]
Shi W Z, Caballero J, Huszár F, Totz J, Aitken A P, Bishop R, Rueckert D and Wang Z H. 2016. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE: 1874-1883 [DOI: 10.1109/CVPR.2016.207]
Shi Y G, Qian M Y and Liu Z W. 2017. Renal cortex segmentation with fully convolutional network and GrowCut. Journal of Image and Graphics, 22(10): 1418-1427 [DOI: 10.11834/jig.170190]
Yan X, Yuan K, Zhao W B, Wang S, Li Z and Cui S G. 2020. An efficient hybrid model for kidney tumor segmentation in CT images//Proceedings of the 17th IEEE International Symposium on Biomedical Imaging (ISBI). Iowa City, USA: IEEE: 333-336 [DOI: 10.1109/ISBI45749.2020.9098325]
Yang D, Xiong T, Xu D G and Zhou S K. 2020. Segmentation using adversarial image-to-image networks//Zhou S K, Rueckert D and Fichtinger G, eds. Handbook of Medical Image Computing and Computer Assisted Intervention. London, UK: Academic Press: 165-182 [DOI: 10.1016/B978-0-12-816176-0.00012-0]
Yang E, Kim C K, Guan Y, Koo B B and Kim J H. 2022. 3D multi-scale residual fully convolutional neural network for segmentation of extremely large-sized kidney tumor. Computer Methods and Programs in Biomedicine, 215: #106616 [DOI: 10.1016/j.cmpb.2022.106616]
Yang J C, Fang R Y, Ni B B, Li Y M, Xu Y and Li L G. 2019. Probabilistic radiomics: ambiguous diagnosis with controllable shape analysis//Proceedings of the 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention. Shenzhen, China: Springer: 658-666 [DOI: 10.1007/978-3-030-32226-7_73]
Yu Q, Shi Y H, Sun J Q, Gao Y, Zhu J B and Dai Y K. 2019. Crossbar-net: a novel convolutional neural network for kidney tumor segmentation in CT images. IEEE Transactions on Image Processing, 28(8): 4060-4074 [DOI: 10.1109/TIP.2019.2905537]
Yu Z M, Pang S C, Du A N, Orgun M A, Wang Y and Lin H. 2020. Fine-grained tumor segmentation on computed tomography slices by leveraging bottom-up and top-down strategies//Proceedings of SPIE 11313, Medical Imaging 2020: Image Processing. Houston, USA: SPIE: #113130E [DOI: 10.1117/12.2550511]
Zhou T, Dong Y L, Huo B Q, Liu S and Ma Z J. 2021. U-Net and its applications in medical image segmentation: a review. Journal of Image and Graphics, 26(9): 2058-2077 [DOI: 10.11834/jig.200704]
Zhou Z W, Siddiquee M R, Tajbakhsh N and Liang J M. 2018. UNet++: a nested U-Net architecture for medical image segmentation//Proceedings of the 4th International Workshop on Deep Learning in Medical Image Analysis. Granada, Spain: Springer: 3-11 [DOI: 10.1007/978-3-030-00889-5_1]