Ultrasound image diagnosis of malignant thyroid nodules by fusing deep network and shallow texture features

Chi Jianning, Yu Xiaosheng, Zhang Yifei (College of Robot Science and Engineering, Northeastern University, Shenyang 110819, China)

Abstract
Objective Benign/malignant analysis of thyroid nodules in ultrasound images is of great significance for the early diagnosis of thyroid cancer. With the development of medical imaging, most early thyroid nodules can be detected accurately in ultrasound images, but the nature of a nodule still cannot be judged accurately. Therefore, to achieve more accurate ultrasound-based benign/malignant diagnosis of early thyroid nodules and to avoid unnecessary fine-needle aspiration or other pathological biopsy procedures, thereby reducing patients' physical pain, psychological stress, and medical costs, a new benign/malignant thyroid nodule classification algorithm based on the fusion of deep network features and shallow texture features is proposed. Method The proposed thyroid nodule classification algorithm consists of four steps. First, the ultrasound images are scale-calibrated, manually annotated, and enhanced by artifact removal and image restoration. Then, the enhanced images are augmented and used as the training set to fine-tune a pre-trained GoogLeNet convolutional neural network by transfer learning, from which the deep features of the images are extracted. Meanwhile, rotation-invariant local binary pattern (LBP) features are extracted as the texture features of the images. Finally, the deep features and the texture features are fused and fed into a cost-sensitive random forest classifier to classify the images as benign or malignant. Result On a standard thyroid nodule malignancy dataset, the proposed method achieves an accuracy of 99.15%, a sensitivity of 99.73%, a specificity of 95.85%, and an area under the ROC curve of 0.9970, outperforming existing thyroid nodule image classification methods. Conclusion The experimental results show that deep features describe the overall visual appearance of lesions in medical ultrasound images, whereas shallow texture features describe properties such as edges and intensity distributions; the fused feature that unifies the two can therefore describe more comprehensively the differences between lesion and non-lesion regions, as well as the differences between lesions of different natures. Therefore, the proposed method can classify thyroid nodules accurately, helping to avoid unnecessary surgery and to reduce patients' pain and stress.
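The GoogLeNet transfer-learning step described above could be prototyped roughly as in the following sketch. This is not the authors' implementation: the use of torchvision's ImageNet-pretrained GoogLeNet, the two-unit classification head, the SGD hyper-parameters, the 224×224 three-channel input, and the choice of the global average-pooling output as the 1024-D deep feature are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumption: ImageNet-pretrained GoogLeNet with its classification head
# replaced by a two-unit layer for the benign/malignant task.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

# Fine-tuning setup; learning rate and momentum are placeholder values, and
# the training loop over the augmented ultrasound set is not shown here.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def extract_deep_features(model, images):
    """Read out the 1024-D activations after global average pooling,
    i.e. the layer feeding GoogLeNet's final fully connected layer."""
    captured = {}
    hook = model.avgpool.register_forward_hook(
        lambda module, inputs, output: captured.update(feat=output.flatten(1)))
    model.eval()
    with torch.no_grad():
        model(images)        # images: (N, 3, 224, 224), grayscale replicated to 3 channels
    hook.remove()
    return captured["feat"]  # shape (N, 1024)
```

After fine-tuning, these 1024-D vectors would serve as the deep half of the fused feature described in the abstract.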
Keywords
Thyroid nodule malignant risk detection in ultrasound image by fusing deep and texture features

Chi Jianning, Yu Xiaosheng, Zhang Yifei (College of Robot Science and Engineering, Northeastern University, Shenyang 110819, China)

Abstract
Objective Detection and analysis of thyroid nodules play vital roles in diagnosing thyroid cancer. With the development of theories and technologies in medical imaging, most thyroid nodules can be incidentally detected at an early stage. However, the nature of a nodule is still difficult to judge accurately, so many patients with benign nodules still undergo fine-needle aspiration (FNA) biopsies or surgeries, which increases their physical pain and mental pressure as well as unnecessary health care costs. Therefore, we present an image-based computer-aided diagnosis (CAD) system for classifying thyroid nodules in ultrasound images, which applies a novel fusion of high-level features from a deep learning network and low-level features from a texture descriptor. Method Our proposed thyroid nodule classification method consists of four steps. First, we pre-process the ultrasound images to enhance image quality: the scale ticks are located and the images are calibrated so that the pixel distance in each image represents the same real-world distance, and artifacts are removed and the images restored so that the lesion regions are not corrupted. Second, the enhanced images are augmented to enlarge the training data set and used to fine-tune the parameters of the pre-trained GoogLeNet convolutional neural network. Meanwhile, rotation-invariant uniform local binary pattern (ULBP) features are extracted from each image as low-level texture features. Third, the high-level deep features extracted by the fine-tuned GoogLeNet network and the low-level ULBP features are normalized and concatenated into one fused feature that represents both the semantic context and the texture patterns distributed in the image. Finally, the fused features are fed into a cost-sensitive random forest classifier that labels the images as "malignant" or "benign". Result The proposed classification method is evaluated on a standard open-access thyroid nodule database, achieving excellent performance: the accuracy, sensitivity, specificity, and area under the ROC curve are 99.15%, 99.73%, 95.85%, and 0.9970, respectively. Conclusion The experimental results indicate that the high-level features extracted from the medical ultrasound image by the deep neural network reflect the visual appearance of the lesion region, while the low-level texture features describe the edges, directions, and distribution of intensities. The combination of the two types of features can describe both the differences between lesion regions and other regions and the differences between lesion regions of malignant and benign thyroid nodules. Therefore, the proposed method classifies thyroid nodules accurately and outperforms most state-of-the-art thyroid nodule classification approaches, especially in reducing the false positive rate of the diagnosis.
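The texture-extraction, fusion, and cost-sensitive classification steps could be sketched as below, using scikit-image's rotation-invariant uniform LBP and a class-weighted random forest from scikit-learn. Again this is only an illustration under assumed settings: the neighbourhood parameters (P = 8, R = 1), the number of trees, the 1:5 cost ratio favouring the malignant class, and the placeholder `deep_features` array (standing in for CNN features such as those from the earlier GoogLeNet sketch) are not taken from the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

P, R = 8, 1  # assumed LBP neighbourhood: 8 samples on a radius-1 circle

def ulbp_histogram(gray_image):
    """Rotation-invariant uniform LBP histogram (P + 2 bins), normalised."""
    codes = local_binary_pattern(gray_image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist

def fuse_features(deep_features, gray_images):
    """L2-normalise the deep features and concatenate the ULBP histograms."""
    texture = np.stack([ulbp_histogram(img) for img in gray_images])
    norms = np.linalg.norm(deep_features, axis=1, keepdims=True) + 1e-8
    return np.hstack([deep_features / norms, texture])

# Cost-sensitive random forest: a higher weight on the malignant class (label 1)
# penalises false negatives more heavily; the 1:5 ratio is an assumption.
clf = RandomForestClassifier(n_estimators=200, class_weight={0: 1.0, 1: 5.0})
# clf.fit(fuse_features(train_deep, train_images), train_labels)
# pred = clf.predict(fuse_features(test_deep, test_images))
```

The class-weight dictionary is one simple way to express cost sensitivity; resampling the malignant cases or thresholding the predicted probabilities would be alternative ways to bias the forest toward high sensitivity.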
Keywords
