Deep hash retrieval for large-scale chest radiography images

Guan Anna1, Liu Li1,2, Fu Xiaodong1,2, Liu Lijun1,2, Huang Qingsong1,2 (1. Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China; 2. Computer Technology Application Key Laboratory of Yunnan Province, Kunming 650500, China)

Abstract
Objective Medical image retrieval plays an important role in disease diagnosis, medical teaching, and symptom reference. However, because of the high inter-class similarity of medical images, easily missed lesions, and the large volume of data, existing hashing methods pay little attention to lesion-region features and achieve low retrieval accuracy. Taking chest X-ray images as an example, this paper proposes a deep hash retrieval network for large-scale chest radiography images. Method In the feature-learning part, ResNet-50 is first used as the backbone network to extract preliminary features from the input image, and these features are refined to obtain global features. In parallel, the preliminary features are fed into a constructed spatial attention module that combines three descriptors to focus on salient regions of the chest radiograph; refining the output of this module yields local features. Finally, the global and local features are fused for subsequent hash-code optimization. In the hash-code optimization part, a joint function of the defined binary cross-entropy loss, contrastive loss, and regularization loss is used for optimization, generating high-quality hash codes for image retrieval. Result To verify the effectiveness of the method, comparative experiments are conducted on the public ChestX-ray8 and CheXpert datasets. The results show that the constructed spatial attention module helps the network attend to lesion regions, the defined feature fusion module effectively avoids information loss, and jointly optimizing the three loss functions yields high-quality hash codes. Compared with state-of-the-art medical image retrieval methods, the proposed method effectively improves retrieval accuracy, raising mean average precision by about 6% and 5% on the two datasets, respectively. Conclusion For large-scale chest radiograph retrieval, the proposed deep hash retrieval method effectively attends to lesion regions and improves retrieval accuracy.
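As an illustration of the spatial attention described above, the PyTorch sketch below shows one plausible way to combine a channel-wise maximum, a channel-wise average, and a max-pooled descriptor into a single attention map that re-weights salient (lesion) regions. The kernel sizes, fusion order, and module name are assumptions for illustration, not the authors' published configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    """Sketch of a three-descriptor spatial attention block (illustrative, not the paper's exact design)."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        # 3 descriptor maps -> 1 attention map
        self.conv = nn.Conv2d(3, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from the backbone
        max_desc, _ = x.max(dim=1, keepdim=True)            # maximum element along the channel axis
        avg_desc = x.mean(dim=1, keepdim=True)               # average element along the channel axis
        pool_desc = F.max_pool2d(max_desc, kernel_size=3,    # locally max-pooled descriptor
                                 stride=1, padding=1)
        attn = torch.sigmoid(self.conv(torch.cat([max_desc, avg_desc, pool_desc], dim=1)))
        return x * attn                                      # re-weight salient regions of the chest radiograph
```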
Keywords
A deep hash retrieval for large-scale chest radiography images

Guan Anna1, Liu Li1,2, Fu Xiaodong1,2, Liu Lijun1,2, Huang Qingsong1,2 (1. Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China; 2. Computer Technology Application Key Laboratory of Yunnan Province, Kunming 650500, China)

Abstract
Objective Big medical data largely consists of storage-intensive content such as electronic health records, medical images, and genetic information, so processing large-scale medical image data efficiently is essential. For large-scale retrieval tasks, deep hashing methods improve on traditional retrieval by mapping the high-dimensional features of an image into a binary space, producing compact binary codes and avoiding the curse of dimensionality. Hashing methods fall into two categories: data-independent and data-dependent. Although deep hashing has clear advantages for large-scale image retrieval, it still tends to lose features of key lesion areas, which are affected by redundant lesions, high noise, and small targets. We therefore develop a deep hash retrieval network for large-scale chest X-ray images. Method In the feature-learning part, ResNet-50 is used as the backbone network to extract initial features from the input image, followed by a feature-refinement block, built from a residual block and an average pooling layer, to obtain global features. To capture focal regions in detail, we design a spatial attention module based on three descriptors: 1) the maximum element along the channel axis, 2) the average element along the channel axis, and 3) a max-pooled map. The initial features are fed into this spatial attention module to obtain features that concentrate on salient regions, and local features are then produced by another feature-refinement block. The resulting global and local features are concatenated along the feature dimension, and the concatenation layer is connected to a fully connected layer whose output is the hash code to be optimized. In the hash-code optimization part, a joint loss function defines the training objective so as to obtain high-quality hash codes and improve the quality of the ranked results. To generate more discriminative hash codes, we exploit the label information and semantic features of the image through contrastive, regularization, and binary cross-entropy losses. Finally, retrieval results are ranked according to a similarity metric on the hash codes. Result Comparative experiments are carried out on two datasets, ChestX-ray8 and CheXpert. Our method is compared with five classical general-purpose hashing methods on the same task, covering both deep and shallow hashing: the deep hashing baselines are deep hashing (DH), deep supervised hashing (DSH), and attention-based triplet hashing (ATH), and the shallow hashing baselines are semi-supervised hashing (SSH) and iterative quantization (ITQ). Normalized discounted cumulative gain (nDCG@100) and mean average precision (mAP) are used as evaluation metrics. The experimental results show that our method outperforms the deep learning-based baselines. On the ChestX-ray8 dataset, mAP increases by about 6% and nDCG@100 by 4%; on the CheXpert dataset, mAP increases by about 5% and nDCG@100 by 3%.
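The hash-code optimization described in the Method can be illustrated with the following sketch of a joint objective that combines a multi-label binary cross-entropy term, a pairwise contrastive term, and a quantization regularizer pushing the real-valued outputs toward binary values. The weighting coefficients, margin, and function names are illustrative assumptions rather than the authors' exact formulation. At test time the real-valued outputs would typically be binarized with a sign function.

```python
import torch
import torch.nn.functional as F

def joint_hash_loss(h, logits, labels, pair_sim, margin=2.0,
                    w_bce=1.0, w_con=1.0, w_reg=0.1):
    """Sketch of a joint objective: BCE + contrastive + quantization regularizer (weights are assumptions)."""
    # 1) classification term on the multi-label predictions
    bce = F.binary_cross_entropy_with_logits(logits, labels.float())

    # 2) contrastive term on pairs of real-valued hash outputs
    #    pair_sim[i, j] = 1 if images i and j share a label, else 0
    d = torch.cdist(h, h)                                   # pairwise Euclidean distances
    con = (pair_sim * d.pow(2) +
           (1 - pair_sim) * F.relu(margin - d).pow(2)).mean()

    # 3) quantization regularizer: |h| should approach 1 before binarization
    reg = (h.abs() - 1).pow(2).mean()

    return w_bce * bce + w_con * con + w_reg * reg
```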
Conclusion To address the problem that existing hashing methods pay little attention to salient-region features, we present a deep hash retrieval network for large-scale chest X-ray image retrieval. The proposed method focuses effectively on lesion regions and thereby improves both the accuracy and the ranking quality of image retrieval. The constructed spatial attention module helps clarify focal-area information and alleviates the lack of attention to salient areas, while the defined feature fusion module effectively resolves the problem of information loss. Three loss functions are combined to bring the real-valued outputs closer to binary hash codes, which improves the ranking quality of the retrieval results. In future work, the ordering of the network components could be adjusted to better attend to the regions of interest (RoI) of concern, the loss function could be further refined over existing hashing methods, and the ability to distinguish small-sample images could be improved.
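For concreteness, the sketch below shows how retrieval over binarized codes is commonly ranked: database items are ordered by Hamming distance to a query code, which for {-1, +1} codes is equivalent to (and cheaper than) ranking by inner product; top_k = 100 mirrors the nDCG@100 cutoff used in the evaluation. This is a generic retrieval routine, not code released by the authors.

```python
import torch

def hamming_rank(query_codes, db_codes, top_k=100):
    """Rank database items by Hamming distance to each query over {-1, +1} hash codes (generic sketch)."""
    # query_codes: (Q, K), db_codes: (N, K), entries in {-1, +1}
    inner = query_codes @ db_codes.t()               # (Q, N) inner products
    hamming = (query_codes.size(1) - inner) / 2      # equivalent Hamming distances for K-bit codes
    dists, idx = hamming.topk(top_k, dim=1, largest=False)
    return idx                                       # indices of the top_k nearest database items per query
```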
Keywords
