Deep iterative fusion network on skull removal of brain magnetic resonance images
2020, Vol. 25, No. 10, pp. 2171-2181
Received: 2020-05-29
Revised: 2020-06-29
Accepted: 2020-07-05
Published in print: 2020-10-16
DOI: 10.11834/jig.200218
Objective
Skull stripping is an important step in the processing and analysis of brain magnetic resonance (MR) images. Because of the complex structure of brain tissue and the influence of acquisition equipment noise, existing methods cannot segment the brain region accurately. We therefore propose a deep iterative fusion convolutional neural network model to achieve accurate skull removal.
Method
The main structure of the proposed DIFNet (deep iteration fusion net) consists of an encoder and a decoder, and the skip connections between them are built from multiple upsampling and iterative fusion operations. The encoder is composed of residual convolution blocks, which let shallow semantic information flow more easily into the deep layers and avoid vanishing gradients. The decoder is built from double-way upsampling modules: deconvolution operations with different receptive fields are applied in parallel and their output feature maps are summed as the module output, which restores more fine-grained details. A Dice loss function with L2 regularization is introduced to train the network, and an internal data augmentation method is adopted, which effectively improves the robustness and generalization capability of the model.
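As a concrete illustration of the training objective described above, the following is a minimal PyTorch-style sketch of a Dice loss combined with an L2 weight penalty. The smoothing constant, the regularization weight `l2_lambda`, and the exact form of the penalty are illustrative assumptions and are not specified in the paper.

```python
import torch


def dice_l2_loss(pred, target, model, l2_lambda=1e-5, smooth=1e-6):
    """Dice loss plus an L2 penalty on the model weights (illustrative sketch).

    pred   : predicted brain-mask probabilities, shape (N, 1, H, W)
    target : binary ground-truth mask, same shape
    """
    pred = pred.reshape(pred.size(0), -1)
    target = target.reshape(target.size(0), -1)

    intersection = (pred * target).sum(dim=1)
    union = pred.sum(dim=1) + target.sum(dim=1)
    dice = (2.0 * intersection + smooth) / (union + smooth)
    dice_loss = 1.0 - dice.mean()

    # L2 regularization over all trainable parameters
    l2_term = sum(p.pow(2).sum() for p in model.parameters())
    return dice_loss + l2_lambda * l2_term
```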
Result
To verify the segmentation performance of the proposed model, two datasets were used to compare it with traditional segmentation algorithms and mainstream deep learning segmentation models. On the NFBS (neurofeedback skull-stripped) test set, which comes from the same source as the training data, the proposed method achieved the highest average Dice score and sensitivity, 99.12% and 99.22%, respectively. When the model trained on the NFBS dataset was applied directly to the LPBA40 (LONI probabilistic brain atlas 40) dataset, its Dice score reached 98.16%.
Conclusion
The proposed DIFNet model can remove the skull quickly and accurately. Compared with mainstream skull segmentation models, it achieves a considerable improvement in accuracy and shows good robustness and generalization capability.
Objective
Magnetic resonance imaging (MRI) is frequently used in clinical applications and is a common means of detecting lesions, injuries, and soft tissue variations in neural system diseases. Skull removal is an important preprocessing step in brain magnetic resonance (MR) image analysis. Its purpose is to remove nonbrain tissue from the brain MR image, thereby facilitating the subsequent extraction and analysis of brain tissue. MR images acquired with clinical scanners inevitably suffer from blurring or noise because of the complexity of brain tissue structure and the effects of equipment noise and field offset. The anatomical structure of brain tissue also differs between individuals, which makes skull segmentation in brain MR images difficult. Most traditional methods for skull segmentation are not fully automatic and often require the operator to use the mouse or other tools to mark the center of the region of interest and to adjust parameters manually. Existing automatic skull segmentation methods do not require human-computer interaction but adapt poorly, and satisfactory segmentation results are difficult to achieve across different MR images. In contrast, deep learning-based methods have shown advanced performance in many segmentation tasks in computer vision. Therefore, we propose a deep iterative fusion convolutional neural network model (DIFNet) in this work to realize skull segmentation.
Method
The main structure of DIFNet is composed of an encoder and a decoder. The skip connections between the encoder and decoder are realized by multiple upsampling and iterative fusion operations, which means that the input of a decoder layer comes not only from the encoder layer at the same level but also from deeper encoder layers. The encoder consists of several residual convolution blocks, which allow shallow semantic information to flow into the deep layers and avoid vanishing gradients. The decoder is composed of double-way upsampling modules: deconvolution operations with different receptive field sizes are applied in parallel and their feature maps are summed to form the module output. Adding this multi-scale information helps restore image details effectively.
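To make the block descriptions concrete, the following is a minimal PyTorch sketch of a residual convolution block and a double-way upsampling block. The channel counts, kernel sizes (a 2×2 and a 4×4 transposed convolution, giving different receptive fields), and normalization choices are assumptions for illustration; the paper's exact configuration may differ.

```python
import torch
import torch.nn as nn


class ResidualConvBlock(nn.Module):
    """Encoder block: two 3x3 convolutions with an identity-style shortcut."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the shortcut matches the output channel count
        self.shortcut = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return torch.relu(self.conv(x) + self.shortcut(x))


class DoubleWayUpsample(nn.Module):
    """Decoder block: two transposed convolutions with different kernel
    sizes (hence different receptive fields); their outputs are summed."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up_small = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.up_large = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4,
                                           stride=2, padding=1)

    def forward(self, x):
        # Both branches double the spatial resolution, so the maps can be added.
        return self.up_small(x) + self.up_large(x)
```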
An internal data augmentation method is adopted to enhance the generalization capability of the model. First, the image is randomly scaled, and the interval of scaling factors is determined according to the ratio of the original image size to the output patch size. Then, a center point is randomly selected in the scaled image and the crop region around it is determined. Lastly, the cropped image patches are fed into the network for training.
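A rough sketch of this internal augmentation, assuming 2D slices, a fixed square patch size, and scipy-based resampling (none of which are stated explicitly in the abstract), might look as follows.

```python
import numpy as np
from scipy.ndimage import zoom


def random_scale_and_crop(image, mask, patch_size=128, rng=None):
    """Randomly rescale an image/mask pair, then crop a random patch."""
    if rng is None:
        rng = np.random.default_rng()

    # Lower bound on the scale so the scaled image still contains a full patch;
    # the upper bound of 1.25 is an arbitrary illustrative choice.
    min_scale = (patch_size + 2) / min(image.shape)
    scale = rng.uniform(min_scale, max(1.25, min_scale))
    image = zoom(image, scale, order=1)   # bilinear for intensities
    mask = zoom(mask, scale, order=0)     # nearest-neighbor for labels

    # Pick a random center whose patch stays inside the scaled image.
    half = patch_size // 2
    cy = rng.integers(half, image.shape[0] - half + 1)
    cx = rng.integers(half, image.shape[1] - half + 1)
    sl = (slice(cy - half, cy + half), slice(cx - half, cx + half))
    return image[sl], mask[sl]
```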
The Dice loss function with an embedded L2 regularization term is used to optimize the model parameters and alleviate overfitting. Two datasets are used in this work to evaluate the accuracy and robustness of the proposed model, and each provides a brain segmentation mask produced by a professional expert as the gold standard. One dataset is NFBS (neurofeedback skull-stripped), part of whose images are held out for testing (the ratio of training to test data is 4:1). The other is LPBA40 (LONI probabilistic brain atlas 40), which is used as an independent dataset to test the generality of the models. For quantitative analysis, the Dice score, sensitivity, and specificity are used.
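For reference, the three evaluation metrics can be computed from binary prediction and ground-truth masks as in the plain NumPy sketch below (not code from the paper).

```python
import numpy as np


def segmentation_metrics(pred, target):
    """Dice score, sensitivity, and specificity for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)

    tp = np.logical_and(pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()

    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity
```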
Result
On the NFBS dataset, the proposed method obtains the highest average Dice score and sensitivity, 99.12% and 99.22%, respectively, compared with U-Net, U-Net with residual blocks (Res-U-Net), and U-Net with double-way upsampling modules (UP-U-Net). The Dice score is increased by 1.88%, 1.81%, and 0.6%, respectively, and the sensitivity and specificity are increased by at least 0.5% compared with the U-Net model. The segmentation results of the model are close to the manual segmentations of experts. The model trained on the NFBS dataset is then applied directly to the LPBA40 dataset to verify its generalization capability; the Dice score obtained in this test reaches 98.16%, whereas the Dice scores of U-Net, UP-U-Net, and Res-U-Net are 81.69%, 77.34%, and 76.42%, respectively. Compared with these models, the proposed model is robust.
Conclusion
Experiments show that the internal data augmentation and deep iterative fusion make the proposed model easy to train and enable it to achieve the best segmentation results. Deep iterative feature fusion also helps guarantee the robustness of the segmentation model.