Hybrid supervised dual-channel feedback U-Net for segmentation of breast ultrasound images
2020, Vol. 25, No. 10, Pages 2206-2217
Received: 2020-05-31; Revised: 2020-07-08; Accepted: 2020-07-15; Published in print: October 2020
DOI: 10.11834/jig.200240
Objective
In the clinical diagnosis and treatment of breast cancer, ultrasound imaging is widely used because it is real-time, radiation-free, and low-cost. Automatic segmentation of breast lesions is a basic preprocessing step for computer-aided diagnosis and quantitative analysis of breast cancer. However, breast ultrasound segmentation is challenging. First, ultrasound images contain considerable noise and artifacts, and the boundary of a lesion is more ambiguous than the foreground in general segmentation tasks. Second, lesion size varies across sample images, and benign and malignant lesions differ markedly in appearance, so the segmentation result depends on the ability of the algorithm to understand the image as a whole. Traditional methods rely on hand-crafted features, which makes it difficult for them to cope with such noise and image structures. In recent years, excellent segmentation models such as U-Net have emerged in the field of medical image segmentation, and many algorithms build on U-Net. Auto-U-Net, for example, uses the idea of iterative training: the probability map output by one model is combined with the original image to form a new input, which is then fed to a new U-Net model for training (see the sketch at the end of this section). However, the number of models needed in Auto-U-Net equals the total number of iterations, which leads to a complex training process and inefficient parameter utilization. Deep learning segmentation algorithms also place demands on data scale and annotation quality, whereas accurate annotation of medical image data requires a high level of expertise. The number of annotated samples therefore cannot be guaranteed, which limits the performance of deep learning models. To address these challenges, using self-supervised learning to assist the training process is a feasible solution in addition to transfer learning. Because self-supervised learning emphasizes learning from the data itself, it can mitigate the high annotation cost in the medical image field. Compared with common transfer learning methods, its advantage for medical images lies in the stronger correlation between the pretext task and the target task. At present, research on self-supervised learning for medical images focuses on the pre-training stage, whereas semantic segmentation serves only as a downstream task for evaluating the learned features; in this process, the pretext task makes no effective use of label information. Facing these limitations, this paper proposes a hybrid supervised dual-channel feedback U-Net (HSDF-U-Net) to improve the accuracy of breast ultrasound image segmentation.
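To make the parameter-inefficiency point concrete, the following is a minimal sketch of the Auto-U-Net-style cascade described above, with trivial convolution layers standing in for separately trained U-Nets; the function name and shapes are illustrative assumptions, not the original implementation.

```python
import torch
import torch.nn as nn

def cascade_refine(models, image):
    """Auto-U-Net-style cascade: each refinement step uses its own model,
    so the parameter count grows linearly with the number of iterations."""
    prob = torch.full_like(image, 0.5)       # initial probability map
    for net in models:                       # a separate network per step
        x = torch.cat([image, prob], dim=1)  # probability map + original image
        prob = torch.sigmoid(net(x))
    return prob

# Toy usage with stand-ins for trained U-Nets (2-channel input, 1-channel output):
models = [nn.Conv2d(2, 1, 3, padding=1) for _ in range(3)]
print(cascade_refine(models, torch.rand(1, 1, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```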
Method
HSDF-U-Net achieves hybrid supervised learning by integrating self-supervised learning with supervised segmentation, and further improves segmentation accuracy through a dual-channel feedback U-Net. To strengthen the correlation between the pretext task and the target task in self-supervised learning, the algorithm designs an edge restoration task based on the information contained in the segmentation labels. The locations of contour pixels are extracted from each segmentation label and used to generate images with blurred lesion edges together with images whose gray values are close to the segmentation mask; these serve as the input and the label of the pretext task. The resulting pre-trained model has a stronger ability to represent lesion edges and is then transferred to the downstream image segmentation task. A minimal sketch of this pair construction appears below.
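The following sketch shows one plausible way to build such a training pair, assuming grayscale images and binary masks as NumPy arrays and using OpenCV for contour extraction; the band width, blur strength, and the exact definition of the target image are assumptions, not the authors' published settings.

```python
import cv2
import numpy as np

def make_edge_restoration_pair(image, mask, band=7, blur_ksize=15):
    """Build one (input, target) pair for the edge restoration pretext task.

    image: grayscale ultrasound image, uint8, HxW
    mask:  binary lesion mask, uint8 in {0, 1}, HxW
    """
    # Extract the locations of contour pixels from the segmentation label.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    edge_band = np.zeros_like(mask)
    cv2.drawContours(edge_band, contours, -1, color=1, thickness=band)

    # Input: the image with its lesion edge made ambiguous by local blurring.
    blurred = cv2.GaussianBlur(image, (blur_ksize, blur_ksize), 0)
    degraded = np.where(edge_band == 1, blurred, image).astype(np.uint8)

    # Target: an image whose gray values are close to the segmentation mask
    # (here simply the mask rescaled to [0, 255]; an assumption).
    target = (mask * 255).astype(np.uint8)
    return degraded, target
```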
In addition, a feedback mechanism is introduced into U-Net to improve the performance of the model on both the pretext edge restoration task and the downstream segmentation task. The mechanism builds on an ordinary feed-forward convolutional neural network (CNN) and incorporates the weight-sharing idea of recurrent neural networks: feature maps are fed back into the network so that the prediction is refined iteratively. We therefore propose a dual-channel feedback U-Net. The output probability map is fed back to the encoding stage as the input of a probability channel of the encoder; together with the ultrasound image it forms a dual-channel input, which is encoded separately and fused before decoding. In this way the prediction result is refined continuously, as sketched below.
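A minimal PyTorch sketch of the dual-channel feedback loop, under the assumption of a single-scale encoder and decoder; the real model is a full U-Net with skip connections, and the channel widths, fusion operator, and number of feedback steps here are illustrative only.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # Two 3x3 convolutions, the basic building block assumed here.
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class DualChannelFeedbackNet(nn.Module):
    """Image and fed-back probability map are encoded by separate branches,
    fused by a 1x1 convolution, and decoded; the pass is repeated with
    shared weights, so parameters do not grow with the iteration count."""

    def __init__(self, steps=3, ch=32):
        super().__init__()
        self.steps = steps
        self.enc_img = conv_block(1, ch)    # encoder branch for the image channel
        self.enc_prob = conv_block(1, ch)   # encoder branch for the probability channel
        self.fuse = nn.Conv2d(2 * ch, ch, kernel_size=1)
        self.dec = nn.Sequential(conv_block(ch, ch), nn.Conv2d(ch, 1, kernel_size=1))

    def forward(self, x):
        prob = torch.full_like(x, 0.5)      # neutral initial probability map
        for _ in range(self.steps):         # same weights reused at every step
            fused = self.fuse(torch.cat([self.enc_img(x), self.enc_prob(prob)], dim=1))
            prob = torch.sigmoid(self.dec(fused))
        return prob

# Toy usage: a 64x64 single-channel image refined over three feedback steps.
print(DualChannelFeedbackNet()(torch.rand(1, 1, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```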
Result
The performance of the HSDF-U-Net algorithm was evaluated on two public breast ultrasound image segmentation datasets. On Dataset B, HSDF-U-Net obtained a sensitivity of 0.8480, a Dice coefficient of 0.8261, and an average symmetric surface distance (ASSD) of 5.81; on the BUSI (breast ultrasound images) dataset, it obtained a sensitivity of 0.8039, a Dice coefficient of 0.8031, and an ASSD of 6.44. These results improve on those of typical deep learning segmentation algorithms.
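For reference, the two overlap metrics reported above follow their standard definitions and can be computed from binary masks as in this sketch (ASSD, a surface-distance measure, is omitted for brevity):

```python
import numpy as np

def sensitivity_and_dice(pred, gt, eps=1e-8):
    """Standard overlap metrics for binary masks of identical shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()        # lesion pixels correctly found
    fn = np.logical_and(~pred, gt).sum()       # lesion pixels missed
    sensitivity = tp / (tp + fn + eps)         # = TP / (TP + FN)
    dice = 2 * tp / (pred.sum() + gt.sum() + eps)
    return sensitivity, dice
```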
Conclusion
The proposed HSDF-U-Net improves the accuracy of lesion segmentation in breast ultrasound images and shows potential application value.
References
Al-Dhabyani W, Gomaa M, Khaled H and Fahmy A. 2020. Dataset of breast ultrasound images. Data in Brief, 28: 104863 [DOI:10.1016/j.dib.2019.104863]
Bian Z J, Qin W J, Liu J R and Zhao D Z. 2018. Review of anatomic segmentation methods in thoracic CT images. Journal of Image and Graphics, 23(10): 1450-1471 [DOI:10.11834/jig.180067]
Chen L, Bentley P, Mori K, Misawa K, Fujiwara M and Rueckert D. 2019. Self-supervised learning for medical image analysis using image context restoration. Medical Image Analysis, 58: 101539 [DOI:10.1016/j.media.2019.101539]
Cheplygina V, de Bruijne M and Pluim J P W. 2019. Not-so-supervised: a survey of semi-supervised, multi-instance, and transfer learning in medical image analysis. Medical Image Analysis, 54: 280-296 [DOI:10.1016/j.media.2019.03.009]
Dou Q, Chen H, Jin Y M, Yu L Q, Qin J and Heng P A. 2016. 3D deeply supervised network for automatic liver segmentation from CT volumes//Proceedings of the 19th International Conference on Medical Image Computing and Computer-Assisted Intervention. Athens, Greece: Springer: 149-157 [DOI:10.1007/978-3-319-46723-8_18]
Hesamian M H, Jia W J, He X J and Kennedy P. 2019. Deep learning techniques for medical image segmentation: achievements and challenges. Journal of Digital Imaging, 32(4): 582-596 [DOI:10.1007/s10278-019-00227-x]
Huang Q H, Luo Y Z and Zhang Q Z. 2017. Breast ultrasound image segmentation: a survey. International Journal of Computer Assisted Radiology and Surgery, 12(3): 493-507 [DOI:10.1007/s11548-016-1513-1]
Jiang Z K, Lyu X G, Zhang J X, Zhang Q and Wei X P. 2020. Review of deep learning methods for MRI brain tumor image segmentation. Journal of Image and Graphics, 25(2): 215-228 [DOI:10.11834/jig.190173]
Jing L L and Tian Y L. 2020. Self-supervised visual feature learning with deep neural networks: a survey. IEEE Transactions on Pattern Analysis and Machine Intelligence [DOI:10.1109/TPAMI.2020.2992393]
Kim W H, Moon W K, Kim S J, Yi A, Yun B L, Cho N, Chang J M, Koo H R, Kim M Y, Bae M S, Lee S H, Kim J Y and Lee E H. 2013. Ultrasonographic assessment of breast density. Breast Cancer Research and Treatment, 138(3): 851-859 [DOI:10.1007/s10549-013-2506-1]
Kolesnikov A, Zhai X H and Beyer L. 2019. Revisiting self-supervised visual representation learning//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 1920-1929 [DOI:10.1109/CVPR.2019.00202]
Lee C Y, Xie S N, Gallagher P, Zhang Z Y and Tu Z W. 2014. Deeply-supervised nets [EB/OL]. [2020-05-25]. https://arxiv.org/pdf/1409.5185.pdf
Li Z, Yang J L, Liu Z, Yang X M, Jeon G and Wu W. 2019. Feedback network for image super-resolution//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 3867-3876 [DOI:10.1109/CVPR.2019.00399]
Litjens G, Kooi T, Bejnordi B E, Setio A A A, Ciompi F, Ghafoorian M, van der Laak J A W M, van Ginneken B and Sánchez C I. 2017. A survey on deep learning in medical image analysis. Medical Image Analysis, 42: 60-88 [DOI:10.1016/j.media.2017.07.005]
Noble J A and Boukerroui D. 2006. Ultrasound image segmentation: a survey. IEEE Transactions on Medical Imaging, 25(8): 987-1010 [DOI:10.1109/TMI.2006.877092]
Oktay O, Schlemper J, Le Folgoc L, Lee M, Heinrich M, Misawa K, Mori K, McDonagh S, Hammerla N Y, Kainz B, Glocker B and Rueckert D. 2018. Attention U-Net: learning where to look for the pancreas [EB/OL]. [2020-05-25]. https://arxiv.org/abs/1804.03999
Ronneberger O, Fischer P and Brox T. 2015. U-Net: convolutional networks for biomedical image segmentation//Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich, Germany: Springer: 234-241 [DOI:10.1007/978-3-319-24574-4_28]
Sahiner B, Chan H P, Roubidoux M A, Hadjiiski L M, Helvie M A, Paramagul C, Bailey J, Nees V A and Blane C. 2007. Malignant and benign breast masses on 3D US volumetric images: effect of computer-aided diagnosis on radiologist accuracy. Radiology, 242(3): 716-724 [DOI:10.1148/radiol.2423051464]
Salehi S S M, Erdogmus D and Gholipour A. 2017. Auto-context convolutional neural network (Auto-Net) for brain extraction in magnetic resonance imaging. IEEE Transactions on Medical Imaging, 36(11): 2319-2330 [DOI:10.1109/TMI.2017.2721362]
Tan C Q, Sun F C, Kong T, Zhang W C, Yang C and Liu C F. 2018. A survey on deep transfer learning//Proceedings of the 27th International Conference on Artificial Neural Networks and Machine Learning. Rhodes, Greece: Springer: 270-279 [DOI:10.1007/978-3-030-01424-7_27]
Yap M H, Pons G, Martí J, Ganau S, Sentís M, Zwiggelaar R, Davison A K and Marti R. 2018. Automated breast ultrasound lesions detection using convolutional neural networks. IEEE Journal of Biomedical and Health Informatics, 22(4): 1218-1226 [DOI:10.1109/JBHI.2017.2731873]
Zhao W, Yang J C, Sun Y L, Li C, Wu W L, Jin L, Yang Z M, Ni B B, Gao P, Wang P J, Hua Y Q and Li M. 2018. 3D deep learning from CT scans predicts tumor invasiveness of subcentimeter pulmonary adenocarcinomas. Cancer Research, 78(24): 6881-6889 [DOI:10.1158/0008-5472.CAN-18-0696]
Zhou Z W, Siddiquee M M R, Tajbakhsh N and Liang J M. 2018. UNet++: a nested U-Net architecture for medical image segmentation//Proceedings of the 4th International Workshop on Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Granada, Spain: Springer: 3-11 [DOI:10.1007/978-3-030-00889-5_1]
Zhou Z W, Sodha V, Rahman Siddiquee M M, Feng R B, Tajbakhsh N, Gotway M B and Liang J M. 2019. Models Genesis: generic autodidactic models for 3D medical image analysis//Proceedings of the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention. Shenzhen, China: Springer: 384-393 [DOI:10.1007/978-3-030-32251-9_42]