Published: 2017-04-16 DOI: 10.11834/jig.20170411 2017 | Volume 22 | Number 4 Medical Image Processing

Received: 2016-09-19; Revised: 2016-12-20. Supported by: National Natural Science Foundation of China (61371156). First author: Zhan Shu (1968-), male, professor; received his Ph.D. in signal and information processing from the Department of Electronic Engineering, University of Science and Technology of China, in 2000. His research interests include medical image analysis, biometric recognition, computer vision, and pattern recognition. E-mail: shu_zhan@hfut.edu.cn. CLC number: TP319. Document code: A. Article ID: 1006-8961(2017)04-0516-07


Deconvolutional neural network for prostate MRI segmentation
Zhan Shu1, Liang Zhicheng1, Xie Dongdong2
1. School of Computer and Information, Hefei University of Technology, Hefei 230009, China;
2. Department of Urology, the Second Affiliated Hospital of Anhui Medical University, Hefei 230601, China
Supported by: National Natural Science Foundation of China (61371156)

# Abstract

Objective Prostate cancer is one of the leading causes of cancer death among older men, and its diagnosis faces many challenges. Imaging-based prostate cancer screening, such as magnetic resonance imaging (MRI), requires an experienced medical professional to extensively review the obtained data and perform a diagnosis. The first step in prostate radiation therapy is to accurately delineate the prostate from the surrounding tissue in the original image. However, prostate MRI suffers from low contrast at tissue boundaries and a small proportion of effective (prostate) area. Manual segmentation takes considerable time and cannot meet clinical real-time requirements. Although several methods presented in the MICCAI 2012 challenge achieved reasonable results, they depended heavily on feature selection or on the performance of statistical shape modeling and thus achieved only limited success. A segmentation algorithm for prostate MRI based on a deep deconvolutional neural network is proposed to address these deficiencies. Method Inspired by the latest deep learning techniques, the fully convolutional network (FCN) and DeconvNet, we present a multi-layer deconvolutional network to demonstrate that a deep neural network can dramatically improve the automated segmentation of prostate MRI images compared with systems based on handcrafted features. The deep neural network model exhibits strong feature-learning and end-to-end training capacities, which yield better performance than earlier image processing techniques. Unlike in an image classification task, each pixel in an MRI image is regarded as an object to be classified. Hence, we obtain the final segmentation result by treating the prediction of prostate tissue as a two-class classification task.
This study presents a multi-layer convolutional network that uses convolution filters, pooling layers, and a decoder network to transform an input MRI image into a probability map. A convolutional neural network is trained to extract highly discriminative image features. A deconvolution strategy is then adopted to expand the feature maps so that the output probability map matches the size of the input image. The stacked convolution and deconvolution layers maintain the resolution by padding the input image. In addition to enabling a deeper network architecture, the stacked convolution layers are robust against overfitting. Finally, the probability map is used to train a softmax classifier, from which the final segmentation result is obtained. We replace the classical neuron activation function with a rectified linear unit in our model to speed up training and avoid vanishing gradients. The Dice similarity coefficient is used as the loss function of the network to overcome the small proportion of effective tissue in the original images. The images provided in MICCAI 2012 vary in size and resolution, so we preprocess them and augment the data set via multi-scale cropping and scale transformation to improve training reliability. Result All experiments are performed on the MICCAI 2012 data set. The proposed algorithm uses the Dice similarity coefficient and the Hausdorff distance as evaluation metrics. The Dice similarity coefficient reaches 89.75% and the Hausdorff distance is 1.3 mm, matching the segmentation accuracy of traditional methods. Furthermore, the processing time is shortened to under 1 min, which is clearly superior to those of other methods. Conclusion The deep learning approach is gradually being applied to the medical field.
This study introduces a new deep learning method for segmenting prostate images. Both qualitative and quantitative experiments show that the prostate segmentation method based on the deconvolutional neural network can segment MRI images accurately and attains higher segmentation accuracy than traditional methods. All computations are performed on a graphics processing unit, and the processing time is considerably shorter than those of other segmentation algorithms. Therefore, the proposed model is highly suitable for the clinical segmentation of prostate images.

# Key words

prostate segmentation; magnetic resonance imaging; convolutional neural network; Dice similarity coefficient; Hausdorff distance

# 1.2 Convolutional layer learning

$\boldsymbol{h}^l = f\left( \sum\limits_{j=1}^{M_x} \sum\limits_{k=1}^{M_y} w_{jk} * \boldsymbol{h}^{l-1}(j, k) + \boldsymbol{b}^l \right)$ (1)
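Eq. (1) can be sketched directly in NumPy. The following is a minimal illustration (not the paper's implementation): a single-channel "valid" convolution followed by a ReLU activation, which is the activation $f$ used later in the model; the function names are our own.

```python
import numpy as np

def relu(x):
    # Rectified linear unit, used as the activation f in Eq. (1)
    return np.maximum(x, 0.0)

def conv2d_forward(h_prev, w, b):
    """Single-channel 2D 'valid' convolution plus ReLU, mirroring Eq. (1):
    h^l = f(sum_{j,k} w_{jk} * h^{l-1}(j, k) + b^l)."""
    Mx, My = w.shape
    H, W = h_prev.shape
    out = np.zeros((H - Mx + 1, W - My + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = h_prev[y:y + Mx, x:x + My]   # receptive field at (y, x)
            out[y, x] = np.sum(w * patch) + b
    return relu(out)
```

In the actual network each layer sums over many input feature maps and learns many filters; this sketch keeps one of each to expose the arithmetic of the equation.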

# 2 Deconvolutional prostate image segmentation algorithm

In 2015, Long et al. [8] proposed the fully convolutional network (FCN) for semantic segmentation of natural images. They replaced the final fully connected layers of a convolutional network with convolutional layers and applied upsampling and feature-map cropping, solving the mismatch between the input image size and the output prediction map size and enabling pixel-wise prediction. Since then, a series of semantic segmentation algorithms trained on convolutional networks have been proposed, repeatedly setting new accuracy records for semantic image segmentation. The deconvolution network (DeconvNet) proposed by Noh et al. [9] extends the FCN: it learns a multi-layer deconvolution network to reconstruct object details, effectively addressing the FCN's tendency to misclassify small objects and lose object boundary details.
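The upsampling step that FCN and DeconvNet rely on can be sketched as a transposed ("de-")convolution: each input pixel scatters a weighted copy of the kernel into a larger output grid. This is a minimal single-channel illustration with names of our own choosing, not the networks' actual layers.

```python
import numpy as np

def deconv2d(x, w, stride=2):
    """Minimal transposed convolution: every input pixel x[y, xx] adds
    x[y, xx] * w into the output at stride-spaced positions, enlarging a
    coarse feature map back toward image resolution."""
    H, W = x.shape
    k = w.shape[0]  # assume a square kernel
    out = np.zeros((stride * (H - 1) + k, stride * (W - 1) + k))
    for y in range(H):
        for xx in range(W):
            out[y * stride:y * stride + k,
                xx * stride:xx * stride + k] += x[y, xx] * w
    return out
```

With stride 2 and a 2x2 kernel, a 2x2 map becomes 4x4, which is exactly the size-recovery behavior the decoder stage of the network needs.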

# 2.2 Dice loss layer

$D = \frac{2\sum\limits_{i=1}^{N} p_i g_i}{\sum\limits_{i=1}^{N} p_i^2 + \sum\limits_{i=1}^{N} g_i^2}$ (4)

 $DSC\left( {\boldsymbol{A}, \boldsymbol{B}} \right) = \frac{{2|\boldsymbol{A} \cap \boldsymbol{B}|}}{{|\boldsymbol{A}| + |\boldsymbol{B}|}}$ (6)
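Eqs. (4) and (6) translate directly into code. The following is a minimal NumPy sketch of the soft Dice coefficient and the derived loss, not the paper's Caffe implementation; the small `eps` term is our addition to guard against division by zero on empty masks.

```python
import numpy as np

def dice_coefficient(p, g, eps=1e-7):
    """Soft Dice from Eq. (4): D = 2*sum(p*g) / (sum(p^2) + sum(g^2)).
    p: predicted probability map; g: binary ground-truth mask."""
    p = p.ravel()
    g = g.ravel()
    return 2.0 * np.sum(p * g) / (np.sum(p ** 2) + np.sum(g ** 2) + eps)

def dice_loss(p, g):
    # Minimizing 1 - D pushes the prediction toward the ground truth,
    # and, unlike per-pixel cross-entropy, it is insensitive to the
    # small fraction of foreground pixels in prostate MRI.
    return 1.0 - dice_coefficient(p, g)
```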

$HD$ reflects the maximum discrepancy between two sets of contour points and is defined as

$HD\left( \boldsymbol{A}, \boldsymbol{B} \right) = \max \left( h\left( \boldsymbol{A}, \boldsymbol{B} \right), h\left( \boldsymbol{B}, \boldsymbol{A} \right) \right)$ (7)
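Eq. (7) can be computed directly over two contour point sets. This is a brute-force NumPy sketch for illustration; in practice an optimized routine such as `scipy.spatial.distance.directed_hausdorff` would be used.

```python
import numpy as np

def directed_hausdorff(A, B):
    """h(A, B) = max over a in A of min over b in B of ||a - b||.
    A, B: (n, d) and (m, d) arrays of contour points."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # (n, m) pairwise distances
    return d.min(axis=1).max()

def hausdorff_distance(A, B):
    """Symmetric HD from Eq. (7): max(h(A, B), h(B, A))."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))
```

The directed distance is asymmetric, which is why Eq. (7) takes the maximum of both directions.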

Table 1 Quantitative comparison of segmentation results

| Method | $DSC$/% | $HD$/mm | Time/min |
| --- | --- | --- | --- |
| Tian [16] | 83.4 | 9.3 | 4 |
| Active contour method [17] | 86 | 9.5 | 6~8 |
| Mahapatra [4] | 92.1 | 5.5 | 23 |
| Stacked ISA [7] | 86.7 | 1.9 | 1.4 |
| Proposed | 89.75 | 1.3 | 0.3 |

# References

• [1] Ralph D J, Wylie K R. Ejaculatory disorders and sexual function[J]. BJU International, 2005, 95(9): 1181–1186. [DOI:10.1111/j.1464-410X.2005.05536.x]
• [2] Sharp G, Fritscher K D, Pekar V, et al. Vision 20/20: perspectives on automated image segmentation for radiotherapy[J]. Medical Physics, 2014, 41(5): #050902. [DOI:10.1118/1.4871620]
• [3] Yacoub J H, Oto A, Miller F H. MR imaging of the prostate[J]. Radiologic Clinics of North America, 2014, 52(4): 811–837. [DOI:10.1016/j.rcl.2014.02.010]
• [4] Mahapatra D, Buhmann J M. Prostate MRI segmentation using learned semantic knowledge and graph cuts[J]. IEEE Transactions on Biomedical Engineering, 2014, 61(3): 756–764. [DOI:10.1109/TBME.2013.2289306]
• [5] Heimann T, Meinzer H P. Statistical shape models for 3D medical image segmentation: a review[J]. Medical Image Analysis, 2009, 13(4): 543–563. [DOI:10.1016/j.media.2009.05.004]
• [6] Yang M J, Li X L, Turkbey B, et al. Prostate segmentation in MR images using discriminant boundary features[J]. IEEE Transactions on Biomedical Engineering, 2013, 60(2): 479–488. [DOI:10.1109/TBME.2012.2228644]
• [7] Liao S, Gao Y Z, Oto A, et al. Representation learning: a unified deep learning framework for automatic prostate MR segmentation[C]//Proceedings of the 16th International Conference on Medical Image Computing and Computer-Assisted Intervention-MICCAI 2013. Berlin Heidelberg: Springer, 2013: 254-261.[DOI: 10.1007/978-3-642-40763-5_32]
• [8] Shelhamer E, Long J, Darrell T. Fully convolutional networks for semantic segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016. [DOI:10.1109/TPAMI.2016.2572683]
• [9] Noh H, Hong S, Han B. Learning deconvolution network for semantic segmentation[C]//Proceedings of 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015: 1520-1528.[DOI: 10.1109/ICCV.2015.178]
• [10] Yu K, Jia L, Chen Y Q, et al. Deep learning: yesterday, today, and tomorrow[J]. Journal of Computer Research and Development, 2013, 50(9): 1799–1804. (in Chinese)
• [11] Lecun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278–2324. [DOI:10.1109/5.726791]
• [12] Mamoshina P, Vieira A, Putin E, et al. Applications of deep learning in biomedicine[J]. Molecular Pharmaceutics, 2016, 13(5): 1445–1454. [DOI:10.1021/acs.molpharmaceut.5b00982]
• [13] Nair V, Hinton G E. Rectified linear units improve restricted boltzmann machines[C]//Proceedings of the 27th International Conference on Machine Learning. Haifa, Israel: Omnipress, 2010: 807-814.
• [14] Jia Y Q, Shelhamer E, Donahue J, et al. Caffe: convolutional architecture for fast feature embedding[C]//Proceedings of the 22nd ACM international conference on Multimedia. New York, USA: ACM, 2014: 675-678.[DOI: 10.1145/2647868.2654889]
• [15] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.
• [16] Tian Z Q, Liu L Z, Fei B W. A fully automatic multi-atlas based segmentation method for prostate MR images[C]//Proceedings of the SPIE 9413, Medical Imaging 2015: Image Processing. Orlando, FL: SPIE, 2015, 9413: #941340.[DOI: 10.1117/12.2082229]
• [17] Kirschner M, Jung F, Wesarg S. Automatic prostate segmentation in MR images with a probabilistic active shape model[C]//Proceedings of PROMISE 2012-MICCAI 2012 Grand Challenge: Prostate MR Image Segmentation. Nice, France: Springer, 2012.