Multimodal brain tumor image segmentation framework based on multi-level parallel neural networks

Ding Yi, Zheng Wei, Geng Ji, Qiu Luyi, Qin Zhiguang (School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China)

Abstract
Objective In the clinical diagnosis and treatment of brain tumors, the scarcity of medical resources and the low efficiency of diagnosis create an urgent need for high-precision medical image segmentation tools to assist clinicians. Convolutional neural networks have become the mainstream approach to brain tumor image segmentation, but they do not make full use of the information in brain tumor images, so their accuracy and efficiency remain limited; moreover, designing an entirely new and efficient deep neural network model from scratch is a costly undertaking. To extract the feature information in brain tumor images more effectively, we propose a multimodal brain tumor image segmentation framework based on multi-level parallel neural networks. Method The framework extends an existing network structure, taking ResNet (residual network) as the backbone. A multi-level parallel feature extraction module and a multi-level parallel up-sampling module are designed to extract the feature information of brain tumors efficiently and fuse it adaptively, enhancing the extraction and expression of feature information. In addition, inspired by the long connection structure of U-Net, a multi-level pyramid long connection module is added to the network to fuse input features of different sizes and improve the propagation efficiency of feature information. A concrete sketch of the parallel feature extraction idea is given below. Result Experiments are conducted on the brain tumor datasets BRATS2015 (brain tumor segmentation 2015) and BRATS2018 (brain tumor segmentation 2018). On BRATS2015, the average Dice scores for the whole tumor, tumor core, and enhancing tumor regions are 84%, 70%, and 60%, respectively, and the segmentation time is within 5 s, surpassing current mainstream segmentation frameworks in both accuracy and time. On BRATS2018, the average Dice scores for the three regions are 87%, 76%, and 71%, which are 8%, 7%, and 6% higher than those of the backbone method. Conclusion We propose a multi-level parallel multimodal brain tumor segmentation framework and verify its performance on brain tumor datasets. Compared with current mainstream brain tumor segmentation methods, the proposed method effectively improves segmentation accuracy and shortens segmentation time, which is of great significance for computer-assisted diagnosis and treatment.
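To make the parallel feature extraction idea concrete, the following Python (PyTorch) sketch shows one encoder stage in which several ResNet-style branches process the same input in parallel and their outputs are fused by a 1x1 convolution. The class names, channel sizes, and number of branches are illustrative assumptions, not the configuration reported in the paper.

import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """A basic ResNet-style residual block: two 3x3 conv + BN layers with a skip."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)


class MultiLevelParallelStage(nn.Module):
    """One encoder stage: parallel residual branches fused by a 1x1 convolution.
    A hypothetical module illustrating the multi-level parallel extraction idea."""

    def __init__(self, channels, num_branches=2):
        super().__init__()
        self.branches = nn.ModuleList(
            [ResidualBlock(channels) for _ in range(num_branches)]
        )
        self.fuse = nn.Conv2d(channels * num_branches, channels, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]       # run branches in parallel
        return self.fuse(torch.cat(feats, dim=1))              # fuse their outputs


if __name__ == "__main__":
    # Example input: one 4-modality MRI slice (Flair, T1, T1c, T2) of size 240 x 240.
    stem = nn.Conv2d(4, 32, 3, padding=1)                       # project the 4 modalities to 32 channels
    stage = MultiLevelParallelStage(32, num_branches=2)
    x = torch.randn(1, 4, 240, 240)
    print(stage(stem(x)).shape)                                 # torch.Size([1, 32, 240, 240])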
Keywords
Multi-level parallel neural network-based multimodal human brain tumor image segmentation framework

Ding Yi, Zheng Wei, Geng Ji, Qiu Luyi, Qin Zhiguang (School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China)

Abstract
Objective Magnetic resonance imaging (MRI) is the primary clinical tool for analyzing human brain tumors, typically acquiring images in the Flair, T1, T1c, and T2 modalities. Glioma is the most common type of brain tumor in adults, and because of its anatomical characteristics the lesions can be identified visually in MRI images. The uneven distribution of medical resources and the scarcity of medical expertise call for computer-assisted diagnosis and treatment. Deep learning has recently demonstrated great potential for brain tumor image segmentation. To further improve segmentation accuracy, current studies focus on strengthening the feature extraction ability of the network through elaborate network structures and on exploiting the information contained in brain tumor images, such as multi-resolution information, spatial multi-view information, post-processing, and symmetry information. Various deep neural network (DNN) models have been developed in computer vision in recent years, such as Visual Geometry Group Network (VGGNet), GoogLeNet, ResNet, and DenseNet, and these models facilitate the development of deep learning-based brain tumor diagnosis methods, whereas designing an entirely new and efficient DNN from scratch is costly. To extract the feature information in brain tumor images more effectively, we develop a multimodal brain tumor image segmentation framework based on multi-level parallel neural networks. Method To enhance the ability to extract and express feature information, the framework extends an existing backbone network: a multi-level parallel feature extraction module and a multi-level parallel up-sampling module extract the feature information of brain tumors efficiently and fuse it adaptively. Deeper features are extracted by iterating multiple backbone branches in parallel, so the level-by-level connection of the network not only broadens its width but also exploits its depth. As a result, the multi-level parallel feature extraction structure has stronger and richer nonlinear representation capability than a single-level structure and can fit the more complex mapping transformations required by complex image features. To preserve the richness of features, the hierarchical parallel structure provides sufficient network width to extract various attributes of the images, such as color, shape, spatial relationship, and texture. Furthermore, inspired by the long connection structure of U-Net, a multi-level pyramid long connection module is integrated into the network to fuse input features of different sizes and improve the transmission efficiency of feature information, as sketched below. The module enhances the richness of features, and its input end fuses information across layers of different sizes. Together, these designs alleviate the loss and deformation of image information to a certain extent, improve the propagation efficiency of features of the same size at the two ends of a long connection, and ultimately benefit the segmentation accuracy of multimodal brain tumor images.
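As an illustration of the pyramid-style long connection, the following Python sketch resizes encoder features from several scales to a common size and fuses them before they cross the skip connection to the decoder. The module name, channel counts, and fusion layers are assumptions made for illustration rather than the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class PyramidLongConnection(nn.Module):
    """Fuse encoder features of different spatial sizes into one skip tensor
    (a hypothetical sketch of a multi-scale long connection)."""

    def __init__(self, in_channels_list, out_channels):
        super().__init__()
        # One 1x1 convolution per input scale to align channel dimensions.
        self.align = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels_list]
        )
        self.fuse = nn.Conv2d(out_channels * len(in_channels_list), out_channels, 3, padding=1)

    def forward(self, feats, target_size):
        # Resize every aligned feature map to the target resolution, then fuse.
        resized = [
            F.interpolate(conv(f), size=target_size, mode="bilinear", align_corners=False)
            for conv, f in zip(self.align, feats)
        ]
        return self.fuse(torch.cat(resized, dim=1))


if __name__ == "__main__":
    skip = PyramidLongConnection([32, 64, 128], out_channels=32)
    feats = [torch.randn(1, 32, 120, 120),   # shallow, high-resolution features
             torch.randn(1, 64, 60, 60),     # mid-level features
             torch.randn(1, 128, 30, 30)]    # deep, low-resolution features
    fused = skip(feats, target_size=(120, 120))
    print(fused.shape)                       # torch.Size([1, 32, 120, 120])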
Result To verify the overall performance of the algorithm, an evaluation is first carried out on the testing set of the public brain tumor dataset BraTS2015. The average Dice scores of the proposed algorithm reach 84%, 70%, and 60% for the whole tumor, tumor core, and enhancing tumor regions, respectively, and the segmentation time is reduced to less than 5 s. Comparative experiments on the feature extraction, up-sampling, and pyramid long connection modules verify the effectiveness of each module against the backbone method. A further experiment is conducted on the BraTS2018 validation set, where the proposed algorithm achieves average Dice scores of 87%, 76%, and 71% for the three regions, which are 8.0%, 7.0%, and 6.0% higher than those of the backbone method. Conclusion We extend a common network backbone and propose a multimodal brain tumor image segmentation framework based on a multi-level parallel neural network. The multi-level parallel expansion strengthens feature extraction, and the hierarchical pyramid long connection module remedies the multi-scale and receptive-field limitations of the original long connection design while improving the richness of features. Experiments demonstrate that the multi-level parallel segmentation framework improves both segmentation accuracy and efficiency.
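For reference, region-wise Dice scores such as those reported above are typically computed as in the short Python sketch below. The label convention (1 = necrosis, 2 = edema, 3 = non-enhancing tumor, 4 = enhancing tumor) is the standard BraTS2015 scheme and is stated here as an assumption; it is not taken from the paper.

import numpy as np


def dice(pred_mask, true_mask, eps=1e-6):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    return (2.0 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps)


def brats_region_dice(pred, true):
    """Return Dice for the whole tumor, tumor core, and enhancing tumor regions.

    `pred` and `true` are integer label volumes using the assumed BraTS2015
    convention: 0 background, 1 necrosis, 2 edema, 3 non-enhancing, 4 enhancing.
    """
    regions = {
        "whole": lambda m: m > 0,                  # all tumor labels
        "core": lambda m: np.isin(m, (1, 3, 4)),   # everything except edema
        "enhancing": lambda m: m == 4,             # enhancing tumor only
    }
    return {name: dice(sel(pred), sel(true)) for name, sel in regions.items()}


if __name__ == "__main__":
    # Random volumes stand in for a predicted and a reference segmentation.
    pred = np.random.randint(0, 5, size=(155, 240, 240))
    true = np.random.randint(0, 5, size=(155, 240, 240))
    print(brats_region_dice(pred, true))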
Keywords
