Attention-based network for bimodal breast ultrasound classification
2022, Vol. 27, No. 3: 911-922
Received: 2021-06-01; Revised: 2021-09-24; Accepted: 2021-10-01; Published in print: 2022-03-16
DOI: 10.11834/jig.210370
Objective
Radiologists typically assess benignity or malignancy by observing the tumor region in breast B-mode (brightness-mode) ultrasound; for cases that are hard to discriminate, they further incorporate the corresponding contrast-enhanced ultrasound (CEUS) features. Because ultrasound images have a narrow gray-value range and benign and malignant appearances overlap, a feature-extraction model that fails to attend to the lesion region will misclassify. To strengthen the network's analysis of key regions, this paper proposes a lesion-region-guided attention mechanism and fuses bimodal data to achieve accurate benign/malignant discrimination of breast ultrasound.
Method
Through comparative experiments, ResNet34 is selected as a backbone classification model suited to ultrasound feature extraction. To learn features more meaningful for classification, the segmentation mask of the nodule (ROI-mask) is used as guided attention to correct the shallow spatial features. The CEUS evaluation descriptors relevant to classification are vectorized and fused with the deep features extracted by the network for the final classification.
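The ROI-mask guidance on shallow features can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the residual modulation form `feat * (1 + mask)` and the nearest-neighbour mask resizing are assumptions chosen for clarity.

```python
import numpy as np

def guided_attention(feat, roi_mask):
    """Modulate shallow feature maps with a lesion ROI mask.

    feat:     (C, H, W) shallow feature maps from the backbone
    roi_mask: (H0, W0) binary segmentation mask of the nodule

    The mask is resized (nearest-neighbour here) to the feature
    resolution and applied as a residual attention term:
        feat' = feat * (1 + mask)
    so responses inside the lesion region are amplified while
    background responses are kept rather than zeroed out.
    """
    c, h, w = feat.shape
    h0, w0 = roi_mask.shape
    # nearest-neighbour downsample of the mask to (h, w)
    ys = np.arange(h) * h0 // h
    xs = np.arange(w) * w0 // w
    mask = roi_mask[np.ix_(ys, xs)].astype(feat.dtype)
    return feat * (1.0 + mask[None, :, :])

# toy example: 2-channel 4x4 features, lesion in the top-left quadrant
feat = np.ones((2, 4, 4))
mask = np.zeros((8, 8))
mask[:4, :4] = 1
out = guided_attention(feat, mask)
print(out[0, 0, 0], out[0, 3, 3])  # 2.0 inside the lesion, 1.0 outside
```

The residual form (1 + mask) rather than a hard mask keeps background context available to deeper layers while still steering attention toward the lesion.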
Result
A breast ultrasound dataset of real hospital cases, BM-Breast (breast ultrasound images dataset), was first constructed. Comparative experiments against common classification frameworks such as ResNet and Inception, and against recent breast-classification studies, show that the proposed algorithm leads on all metrics. The proposed fusion algorithm achieves a classification accuracy of 87.45% and an AUC (area under the curve) of 0.905. To evaluate the attention-guidance design, experiments were run on both our dataset and a public dataset; accuracy improved by 3% over the compared algorithms, indicating good generalization. The results show that fusing features from the two ultrasound modalities improves the final classification accuracy.
Conclusion
The proposed attention-guided model learns discriminative classification features tailored to the characteristics of breast ultrasound imaging, and the bimodal feature-fusion diagnosis further improves the model's classification ability. The high specificity reflects robustness to noisy samples and the ability to identify hard-to-judge cases fairly accurately, so the algorithm has high value for clinical guidance.
Objective
Brightness-mode (B-mode) ultrasound images are formed from sound waves echoed back to the probe at the interfaces between different tissues. The modality involves no ionizing radiation and is inexpensive, and it has become one of the most routine medical imaging tools for clinicians and radiologists. Radiologists usually diagnose tumors by observing the tumor region in breast ultrasound and, for hard cases, fuse these observations with the corresponding contrast-enhanced ultrasound (CEUS) features for further discrimination; the lesion area thus provides effective information on tumor shape and boundary. Machine learning, especially deep learning, can learn mid-level and high-level abstract features from the original ultrasound, yielding applicable methods that play an important role in the clinic, such as auxiliary diagnosis and image-guided treatment; it is now widely used for tumor diagnosis, organ segmentation, and region-of-interest (ROI) detection and tracking. However, because B-mode ultrasound carries limited information and benign and malignant appearances overlap, a neural network may fail to focus on the lesion area of a poorly imaged scan during feature extraction, causing classification errors. To improve the accuracy of human breast ultrasound diagnosis, this research presents an end-to-end automatic benign/malignant classification model fused with bimodal data.
Method
First, a ResNet34 backbone network is chosen through experimental comparison; it learns clearer classification features, especially for breast ultrasound images with poor imaging quality. The ultrasound tumor segmentation mask is then used as a guide on top of the residual blocks to strengthen the learning of key classification features, letting the model concentrate on the target region by reducing the interference of overlapping tumor appearances and lowering the influence of poor image quality. This attention-guidance mechanism enhances key features, but noisy samples, such as benign tumors exhibiting irregular morphology, edge burrs, or other features typical of malignant tumors, still reduce classification accuracy. CEUS morphological features can further distinguish the pathological nature of tumors. We therefore use natural-language-processing methods to convert the textual CEUS evaluation notes into feature vectors, and we visualize the spatial distribution of the converted vectors to verify their usability for breast tumor classification; the features are distributed in clusters with clear polarity, demonstrating the effectiveness of the contrast features. Finally, the deep image features extracted from B-mode ultrasound are fused with the CEUS feature vectors to classify breast tumors.
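The final fusion step can be sketched as below. This is a hedged illustration, not the paper's architecture: fusion by simple concatenation, the feature dimensions (512 for the pooled deep features, 32 for the CEUS text embedding), and the single linear sigmoid head are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_classify(deep_feat, ceus_vec, W, b):
    """Concatenate pooled deep B-mode features with the vectorized
    CEUS descriptors and apply a linear benign/malignant head.

    deep_feat: (D,) pooled deep features from the backbone
    ceus_vec:  (K,) embedding of the CEUS evaluation text
    Returns the malignancy probability via a sigmoid.
    """
    z = np.concatenate([deep_feat, ceus_vec])
    logit = z @ W + b
    return 1.0 / (1.0 + np.exp(-logit))

D, K = 512, 32                          # illustrative dimensions
W = rng.normal(scale=0.01, size=D + K)  # stand-in for learned weights
b = 0.0
p = fuse_and_classify(rng.normal(size=D), rng.normal(size=K), W, b)
print(0.0 < p < 1.0)
```

Concatenation keeps the two modalities' contributions separable in the learned weights, which makes it easy to inspect how much the CEUS text vector influences the final decision.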
Result
The adopted BM-Breast (breast ultrasound images dataset) contains 1 093 desensitized and preprocessed breast ultrasound samples (benign: malignant = 562:531). To verify the algorithm's effectiveness, we compare it against several mainstream classification algorithms and against reported accuracies of breast ultrasound classification methods. The proposed fusion algorithm reaches a classification accuracy of 87.45% and an area under the curve (AUC) of 0.905. For the attention-guidance module, experiments on both a public dataset and our private dataset show that classification accuracy improves by 3%, so the algorithm is effective and robust.
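The reported AUC is the standard threshold-free metric: the probability that a randomly chosen malignant sample is scored above a randomly chosen benign one. A self-contained sketch of how it is computed (this illustrates the metric itself, not the paper's evaluation code):

```python
import numpy as np

def auc_score(labels, scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive sample is scored
    higher, counting ties as half a win."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# toy example: one malignant sample (0.6) is outscored by a benign one (0.7)
y = [1, 1, 0, 0, 1, 0]
s = [0.9, 0.8, 0.7, 0.2, 0.6, 0.1]
print(round(auc_score(y, s), 3))  # 0.889 (8 of 9 pairs ranked correctly)
```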
Conclusion
The guided-attention model learns effective features of breast ultrasound, and fusing bimodal data features further improves diagnostic accuracy. The high specificity shows that the model can recognize noisy samples and more accurately distinguish cases that are difficult to judge in clinical diagnosis. The experimental results show that the proposed classification model has practical value, promoting further cooperation between medicine and engineering in clinical diagnosis.