Generic approach to obtain contours of fascicular groups from MicroCT images of the peripheral nerve
2020, Vol. 25, No. 2: 354-365
Received: 2019-06-06; Revised: 2019-09-03; Accepted: 2019-09-10; Published in print: 2020-02-16
DOI: 10.11834/jig.190243
Objective
Peripheral nerve specimens prepared with different staining methods yield MicroCT tomographic images of different appearance. For the two kinds of peripheral nerve MicroCT images obtained with saturated calcium chloride staining and with no staining, this paper proposes a generic method that extracts the contours of fascicular groups from peripheral nerve MicroCT images produced by different staining methods within a unified framework.
Method
First, the architecture of the generic method is designed, the image datasets are constructed, and key preparatory steps such as image annotation and grouping are completed. Then, a transfer learning algorithm is combined with the mask region-based convolutional neural network (mask R-CNN) to build the recognition model of the generic method. Finally, multiple groups of experiments are designed, and the generic method is trained and tested on differently grouped image datasets to verify its effectiveness.
Result
On all dataset groups, the generic method achieves a fascicular contour extraction accuracy above 95%, an intersection over union above 86%, and a detection time below 0.06 s. Moreover, on the dataset whose fascicular contour information is more complex, the recognition model combining transfer learning with mask R-CNN improves accuracy by 5.5%-9.8% and intersection over union by 2.4%-3.2% compared with the plain mask R-CNN model.
Conclusion
The experimental results show that, for peripheral nerve MicroCT images obtained with different staining methods, the proposed method extracts fascicular contours accurately, rapidly, and fully automatically. In addition, the transferred mask R-CNN significantly improves the accuracy and robustness of fascicular contour extraction.
Objective
Peripheral nerve injury can result in severe paralysis and dysfunction, and repairing and regenerating injured peripheral nerves has long been an urgent goal. Three-dimensional (3D) visualization of the fascicular groups in the peripheral nerve provides detailed intraneural spatial information, and suitable surgical methods must be selected to repair clinical peripheral nerve defects. The contour information in peripheral nerve MicroCT images is the basis of peripheral nerve 3D reconstruction and visualization, and obtaining the contours of the fascicular groups is a key step in 3D nerve visualization. In previous research, MicroCT images of the peripheral nerve were obtained. The foreground and background of these images differed considerably when the images came from samples stained by different methods, such as dyed or not dyed with calcium chloride. If previous segmentation approaches were used to extract the contours of fascicular groups, various labor-intensive feature extraction and recognition methods had to be applied, and the results were inconsistent. An assisting methodology based on image processing can improve the accuracy of obtaining the contour information of fascicular groups. Thus, this study analyzes graph cut theory and algorithms and proposes a generic framework that yields numerous consistent results easily. The proposed algorithm can assist in neurosurgery diagnosis and has great clinical application value. In the generic framework, the MicroCT images from differently dyed samples are processed by the same algorithm, which results in consistent and accurate extracted contours of fascicular groups.
Method
The proposed method is based on deep learning, which automatically extracts intrinsic features from image data, analyzes images instantly, effectively improves detection efficiency, and can be applied to complex images. The mask region-based convolutional neural network (mask R-CNN) is used to extract contour information from peripheral nerve MicroCT images. Given the impressive achievements of mask R-CNN in object segmentation, the accuracy of recognition and classification at the pixel level is greatly improved. First, the structure of the generic framework is designed, and the image datasets are constructed. Several key preparations are performed, such as image annotation and grouping. The image dataset is divided into training and test sets at a ratio of 3:1. On the basis of the dyeing method of the peripheral nerve MicroCT images, the training and test datasets comprise three subsets, namely, the calcium chloride-dyed image dataset (subset 1), the nondyed image dataset (subset 2), and the mixed image dataset (subset 3). Second, the principle of mask R-CNN is analyzed, and the generic frameworks for image classification and segmentation are designed by combining mask R-CNN with transfer learning. Although mask R-CNN is efficient in common segmentation tasks, it has several limitations: it typically requires many images for training, and the datasets of peripheral nerve MicroCT images do not contain enough images to train it. Thus, mask R-CNN cannot be used directly to extract the contours of fascicular groups from the MicroCT images of peripheral nerves. Therefore, a transfer learning strategy is combined with mask R-CNN to solve this problem. The training parameters of the neural network structure are adjusted manually: mask R-CNN is pretrained on the COCO dataset and then transferred to the peripheral nerve dataset for further learning, which improves the accuracy of extracting the contours of the fascicular groups. Finally, the target segmentation model based on the contour information of peripheral nerve MicroCT images is constructed. Third, the generic framework is trained and tested with the image datasets from the different groups, and the results indicate that a highly effective segmentation can be achieved.
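The 3:1 train/test split described above can be sketched as follows. This is only an illustration under assumed conditions — the filenames, subset sizes, and random seed are hypothetical, not the authors' actual preprocessing code:

```python
import random

def split_dataset(image_ids, train_ratio=0.75, seed=42):
    """Shuffle image ids reproducibly and split them 3:1 into train and test sets."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    n_train = int(len(ids) * train_ratio)
    return ids[:n_train], ids[n_train:]

# Hypothetical filenames mirroring the paper's three subsets.
subset1 = [f"cacl2_{i:03d}.png" for i in range(228)]    # calcium chloride-dyed
subset2 = [f"nondyed_{i:03d}.png" for i in range(523)]  # nondyed
subset3 = subset1 + subset2                             # mixed

train, test = split_dataset(subset3)
print(len(train), len(test))  # 563 188
```

The same helper can be applied to each subset separately, so that every experiment uses the same 3:1 ratio.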
Result
All experiments are conducted in a GTX1070-8G environment. The experimental data are derived from a 5 cm peripheral nerve segment. The peripheral nerve sample is transected into 3 mm segments at -20 ℃, and these segments are prepared with the calcium chloride-dyed and nondyed methods to facilitate the discrimination of different fascicular groups in the MicroCT images. The scan sequence from the calcium chloride-dyed method includes 228 images, and the scan sequence from the nondyed method includes 523 images. In each dataset, the training set is used to train the parameters of the neural network structure, and the test set is used to evaluate the actual segmentation results. Model training took 15 000 iterations. The experimental results show that the pixel average precision on all datasets exceeds 95% and that the segmentation accuracy is high and highly consistent with the results of manual segmentation. The intersection over union exceeds 86%, and the mean detection time is less than 0.06 s, which satisfies real-time requirements. Compared with the original mask R-CNN, the proposed framework achieves improved performance, increasing average precision by approximately 5.5%-9.8% and intersection over union by approximately 2.4%-3.2%.
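The intersection over union reported above is the standard mask-overlap measure. A minimal sketch for binary masks — plain Python for illustration, not the evaluation code used in the paper:

```python
def iou(mask_a, mask_b):
    """Intersection over union of two same-sized binary masks (rows of 0/1 values)."""
    inter = sum(a and b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    union = sum(a or b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    return inter / union if union else 1.0  # two empty masks overlap perfectly

pred  = [[1, 1, 0],
         [1, 1, 0],
         [0, 0, 0]]
truth = [[1, 1, 0],
         [1, 0, 0],
         [0, 0, 0]]
print(iou(pred, truth))  # 0.75
```

An IoU above 86% therefore means that, pixel for pixel, the predicted fascicle mask and the manually annotated mask share at least 86% of their combined area.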
Conclusion
Theoretical analysis and experimental results justify the feasibility of the proposed framework. Although the training set built from the experimental data is relatively limited, the experiments show that the proposed approach can accurately, rapidly, and automatically extract the contours of fascicular groups. Furthermore, the accuracy, segmentation effect, and robustness are greatly improved when mask R-CNN is combined with transfer learning. The framework can be widely applied to segment MicroCT images from differently dyed samples, and the automated segmentation of the MicroCT images of peripheral nerves has substantial clinical value. Finally, we discuss the challenges of this generic framework and several unsolved problems.
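The transfer strategy underlying these results — reuse COCO-pretrained weights and retrain only the task-specific parts on the small nerve dataset — can be illustrated schematically. The layer names, weight values, and single-step update below are hypothetical toys, not the authors' implementation:

```python
# Toy illustration of the freeze-and-fine-tune idea behind transfer learning.
pretrained = {             # weights assumed to come from COCO pretraining
    "backbone.conv1": 0.31,
    "backbone.conv2": -0.12,
    "head.mask": 0.05,     # task-specific heads retrained on the nerve dataset
    "head.box": -0.40,
}

def fine_tune(weights, frozen_prefix="backbone.", lr=0.1, grad=1.0):
    """Apply one gradient step, skipping every frozen (pretrained backbone) layer."""
    return {
        name: w if name.startswith(frozen_prefix) else w - lr * grad
        for name, w in weights.items()
    }

tuned = fine_tune(pretrained)
print(tuned["backbone.conv1"])  # 0.31 (frozen, unchanged)
print(tuned["head.mask"])       # updated by the gradient step
```

Freezing the general-purpose backbone is what lets a few hundred annotated MicroCT slices suffice where training mask R-CNN from scratch would not.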