Classification method for easily confused difficult samples in breast ultrasound images
2020, Vol. 25, No. 7, pp. 1490-1500
Received: 2019-08-29; Revised: 2019-12-16; Accepted: 2019-12-23; Published in print: 2020-07-16
DOI: 10.11834/jig.190442

Objective
Ultrasound is usually the first-choice imaging examination and preoperative assessment method for breast tumors, but it suffers from overlapping image appearances of benign and malignant nodules, heavy dependence on physician experience, and the need for substantial human-computer interaction. To reduce misdiagnosis and the rate of unnecessary puncture biopsies, and to raise the level of diagnostic automation, this paper proposes an end-to-end model that automatically extracts the nodule region and discriminates between benign and malignant nodules.
Method
To address speckle noise in ultrasound images, an edge-enhanced anisotropic diffusion (EEAD) denoising model is used for data preprocessing. An improved loss function is then proposed to strengthen the discrimination of benign and malignant nodule characteristics: a combination of shape descriptors is used to mine difficult samples whose shapes resemble those of other classes and which are therefore easily misclassified. To make these difficult samples more separable, a shape-constraint loss term over the difficult samples is built on top of the improved loss function to adjust the feature mapping between samples that are similar in shape but belong to different classes.
Result
To validate the algorithm, a breast ultrasound dataset of 1 805 images was constructed. On this dataset, the average diagnostic accuracy of physicians with five years of experience is 85.3%, whereas the proposed method achieves a classification accuracy of 92.58%, a sensitivity of 90.44%, a specificity of 93.72%, and an AUC (area under curve) of 0.946, all superior to the comparison algorithms. Compared with the traditional softmax loss function, each evaluation metric improves by 5%–12%.
Conclusion
This paper proposes a practical end-to-end classification method for breast ultrasound images. By incorporating medical knowledge into the optimization model, the added shape-constraint loss term for difficult samples improves the accuracy and robustness of benign-malignant diagnosis of breast tumors; every evaluation metric exceeds that of the ultrasound physicians, indicating clinical value.
Objective
Ultrasound is the primary imaging examination and preoperative assessment for breast nodules. In the qualitative diagnosis of nodules in breast ultrasound images, the Breast Imaging Reporting and Data System (BI-RADS), with six levels, is commonly used by physicians to evaluate the degree of malignancy of breast lesions. However, BI-RADS evaluation is time-consuming and based mostly on the morphological features and partial acoustic information of a lesion. Diagnosis relies heavily on the experience of physicians because the image appearances of benign and malignant breast nodules overlap, and the diagnostic accuracy of physicians with different qualifications can differ by up to 30%. Therefore, misdiagnosis or missed diagnosis can easily occur, increasing the rate of needless puncture biopsies. Current computer-assisted breast ultrasound diagnosis requires considerable human interaction; its level of automation is low, and its accuracy is unsatisfactory. In recent years, deep learning has been applied to visual tasks such as medical ultrasound image classification and has achieved good results. This study proposes an end-to-end model for automatic nodule extraction and classification.
Method
A breast ultrasound image is formed from ultrasonic signals reflected when ultrasound encounters tissues in the human body. Given the limitations of the medical ultrasound imaging mechanism, noise interference in breast ultrasound images is typically severe and dominated by additive thermal noise and multiplicative speckle noise. Thermal noise is caused by heat generated in the capture device and can be avoided by physical cooling. Speckle noise appears as bright and dark particle-like spots formed by the constructive and destructive interference of reflected ultrasonic waves; it is unavoidable given the principle of ultrasound imaging. To suppress speckle noise, this work uses edge-enhanced anisotropic diffusion (EEAD) as a preprocessing step. Then,
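The idea behind diffusion-based despeckling can be illustrated with a classic Perona-Malik scheme; this is a minimal sketch only, since the EEAD model used in the paper additionally performs edge enhancement not shown here, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, step=0.2):
    """Perona-Malik diffusion: smooths homogeneous (speckled) regions
    while largely preserving edges. A simplified stand-in for EEAD."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conduction coefficients: near zero across strong
        # edges, near one inside flat regions, so edges diffuse slowly
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += step * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

On a noisy step image, the flat regions are smoothed while the intensity jump at the boundary survives, which is the behavior a despeckling preprocessor needs before lesion segmentation.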
an improved loss function is proposed to enhance the discriminative performance of the method with respect to the benign and malignant characteristics of nodules. We use a combination of shape descriptors (concavity, aspect ratio, compactness, circular variance, and elliptic variance) to identify difficult samples whose shapes are similar to those of other classes and which are therefore prone to misjudgment. To make such difficult samples more distinguishable, the improved loss function builds a shape-constraint loss term over the difficult samples to adjust their feature mapping.
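Two of the listed descriptors (aspect ratio and compactness) can be sketched from a binary lesion mask as follows; the exact formulations used in the paper are not given in the abstract, so the definitions below (bounding-box aspect ratio, 4πA/P² compactness with a crude pixel-count perimeter) are common-choice assumptions, and the remaining descriptors follow the same pattern.

```python
import numpy as np

def shape_features(mask):
    """Aspect ratio and compactness of a binary lesion mask.
    Elongated or irregular (often malignant-looking) shapes give a
    high aspect ratio and a low compactness."""
    mask = mask.astype(bool)
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    aspect_ratio = max(h, w) / min(h, w)   # >= 1; large for elongated nodules
    area = mask.sum()
    # Crude perimeter: region pixels with at least one background 4-neighbour
    p = np.pad(mask, 1)
    interior = (p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:])
    perimeter = (mask & ~interior).sum()
    compactness = 4.0 * np.pi * area / perimeter ** 2   # higher = rounder
    return {"aspect_ratio": float(aspect_ratio),
            "compactness": float(compactness)}
```

Samples whose descriptor vectors lie close to those of the opposite class would then be flagged as difficult and fed to the shape-constraint loss term.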
Result
A breast ultrasound dataset of 1 805 images was collected to validate the method. On this dataset, the average diagnostic accuracy of physicians with five-year qualifications is 85.3%, whereas the classification accuracy, sensitivity, specificity, and area under the curve (AUC) of the proposed method are 92.58%, 90.44%, 93.72%, and 0.946, respectively, which are superior to those of the comparison algorithms. Moreover, compared with the traditional softmax loss function, performance improves by 5%–12%.
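The four reported measures can be computed from predicted malignancy scores as sketched below (labels and scores here are illustrative, not the paper's data; AUC uses the rank-sum formulation and assumes untied scores for simplicity).

```python
import numpy as np

def diagnostic_metrics(labels, scores, threshold=0.5):
    """Accuracy, sensitivity, specificity and AUC for a binary
    benign(0)/malignant(1) classifier."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0))
    fp = np.sum(pred & (labels == 0))
    fn = np.sum(~pred & (labels == 1))
    accuracy = (tp + tn) / labels.size
    sensitivity = tp / (tp + fn)     # fraction of malignant cases found
    specificity = tn / (tn + fp)     # fraction of benign cases cleared
    # AUC via the Mann-Whitney rank-sum statistic (no score ties assumed)
    ranks = np.empty(labels.size, dtype=float)
    ranks[scores.argsort()] = np.arange(1, labels.size + 1)
    n_pos = np.sum(labels == 1)
    n_neg = labels.size - n_pos
    auc = (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
    return accuracy, sensitivity, specificity, auc
```

Sensitivity matters most clinically (a missed malignancy is costlier than a false alarm), which is why it is reported alongside overall accuracy.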
Conclusion
This study focuses on the classification of benign and malignant nodules in two-dimensional breast ultrasound images, drawing on advances in machine learning, computer vision, and medicine. To address the poor quality of two-dimensional breast ultrasound images, the small proportion of the image occupied by lesions, the overlapping appearances of benign and malignant nodules, and the heavy dependence of diagnosis on physician experience, this paper makes improvements at both the data preprocessing and algorithm levels. At the data preprocessing level, the original breast ultrasound data are denoised and augmented to improve dataset quality and maximize data utilization. At the algorithm level, the training process is dynamically monitored to mine difficult samples on the fly and to regularize inter-class distances, improving benign-malignant classification. An end-to-end breast ultrasound image analysis model is thus proposed that is practical and useful in the clinic. By incorporating medical knowledge into the optimization process and adding the shape-constraint loss of difficult samples, the accuracy and robustness of benign-malignant breast diagnosis are considerably improved; each evaluation result is even higher than that of ultrasound physicians. Thus, our method has high clinical value.
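The exact form of the improved loss is not given in the abstract; one plausible realisation, sketched below under that assumption, adds to the softmax cross-entropy a hinge penalty that pushes apart the embeddings of mined hard pairs (samples that are shape-similar but differently labelled). The pair list, margin, and weight `lam` are all hypothetical.

```python
import numpy as np

def shape_constrained_loss(logits, labels, feats, hard_pairs,
                           margin=1.0, lam=0.1):
    """Softmax cross-entropy plus a shape-constraint term: features of
    each hard pair (i, j) should be at least `margin` apart, so that
    shape-similar samples of different classes stay separable."""
    # Numerically stable softmax cross-entropy
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -logp[np.arange(len(labels)), labels].mean()
    # Hinge penalty on the mined hard pairs
    penalty = 0.0
    for i, j in hard_pairs:
        d = np.linalg.norm(feats[i] - feats[j])
        penalty += max(0.0, margin - d)
    return ce + lam * penalty / max(len(hard_pairs), 1)
```

With no hard pairs the term vanishes and the loss reduces to plain softmax cross-entropy; as a hard pair's embeddings drift closer than the margin, the penalty grows and the optimizer is pushed to separate them.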