Fusion method via KNN with weight adjustment for the classification of image-text co-occurrence data
2016, Vol. 21, No. 7, pp. 854-864
Online publication: 2016-07-01; print publication: 2016
DOI: 10.11834/jig.20160703

The best classification methods for image-text data differ across application scenarios, yet most existing semantic-level fusion algorithms assume that the images and text are classified by the same method. When different classification methods are used, their classification decision benchmarks are not unified, which yields unsatisfactory results and substantially degrades fusion classification performance. To address this problem, a fusion classification method based on weighted KNN is proposed. First, a softmax multi-class classifier and a multi-class support vector machine (SVM) classify the images and text, respectively; at the same time, KNN models for image and text are built from the classification decision values of the correctly discriminated training instances, weighted by the per-category classification precision on the training set. These models then predict the image and text classification decision values of each test instance, and the number of the k nearest-neighbor instances belonging to each category determines the test instance's classification probabilities, thereby unifying the decision benchmarks of image and text. Finally, the numbers of correctly classified image and text instances in the training set determine the fusion coefficients for the test instance's image and text classification probabilities, achieving fusion of image-text data under a unified decision benchmark. Experiments on image-text pairs from the Attribute Discovery dataset, compared with baseline methods, show that the proposed fusion algorithm's classification precision exceeds the precision of the image and text classifiers taken separately, and its average classification precision improves on the baselines by 4.45%; moreover, its average ability to integrate image-text information improves on the baselines by 4.19%. The proposed algorithm unifies the decision benchmarks of different image and text classification methods, achieves effective fusion of image-text data, and exhibits strong information-integration ability and good fusion classification performance.
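The weighting step described above — scaling the decision values of correctly discriminated training instances by the per-category precision before building the KNN model — can be sketched roughly as follows. All names (`decision_values`, `labels`, `per_class_precision`) are hypothetical, and the exact weighting scheme is an assumption, not the paper's verified implementation:

```python
import numpy as np

def weight_decision_values(decision_values, labels, per_class_precision):
    """Keep only the training instances the classifier discriminates
    correctly, and scale each instance's decision-value vector by the
    training precision of its true class (hypothetical sketch)."""
    predicted = decision_values.argmax(axis=1)
    correct = predicted == labels                     # correctly discriminated instances
    weights = per_class_precision[labels[correct]]    # per-category precision weights
    return decision_values[correct] * weights[:, None], labels[correct]

# Tiny illustration: the third instance is misclassified and dropped.
dv = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
y = np.array([0, 1, 1])
weighted_dv, kept_labels = weight_decision_values(dv, y, np.array([0.5, 1.0]))
```

The weighted decision values and surviving labels would then serve as the reference points of the modality's KNN model.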
Most existing fusion methods assume that the image classification method is identical to the text classification method. In many application scenarios, however, the best classification methods for the image and text components of image-text co-occurrence data differ, and the decision benchmarks of different classification methods are not unified, which reduces the classification precision of the fused result. To overcome this problem, a fusion method based on KNN with weight adjustment is proposed for the classification of image-text co-occurrence data. First, softmax and multi-class SVM classifiers are used for the images and text, respectively, and KNN models for image and text are constructed from the weighted classification decision values of the training instances that each classifier discriminates correctly. These KNN models then predict the classification decision values of a test instance; the image and text classification probabilities of the test instance are determined by the number of each category among the nearest neighbors, and these probabilities unify the classification decision benchmarks. Finally, a fusion coefficient, computed from the numbers of image and text instances discriminated correctly on the training set, is applied to fuse the image and text classification probabilities of the test instance. Experiments on the Attribute Discovery dataset, compared against baseline methods, show that the proposed fusion method achieves higher classification precision than either the image or the text classification method alone and improves average classification precision by 4.45% over the baselines; it also improves the average information-integration ability for image-text co-occurrence data by 4.19%. By unifying the classification decision benchmarks of the different image and text classification methods, the proposed method achieves effective fusion of image-text co-occurrence data, with strong information-integration ability and good fusion classification performance.
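As a rough illustration of the last two steps — deriving class probabilities from nearest-neighbor category counts and fusing them with a coefficient based on training-set correct counts — the following NumPy sketch uses hypothetical function and variable names; the distance metric and k are assumptions, not details given in the abstract:

```python
import numpy as np

def knn_class_probability(ref_values, ref_labels, test_value, k, n_classes):
    """Estimate class probabilities for a test instance from the category
    counts of its k nearest neighbors in decision-value space (sketch)."""
    dists = np.linalg.norm(ref_values - test_value, axis=1)  # Euclidean distance, an assumption
    nearest = ref_labels[np.argsort(dists)[:k]]
    return np.bincount(nearest, minlength=n_classes) / k

def fuse_probabilities(p_image, p_text, n_image_correct, n_text_correct):
    """Weight the modality probabilities by the numbers of image and text
    instances classified correctly on the training set (sketch)."""
    alpha = n_image_correct / (n_image_correct + n_text_correct)
    return alpha * p_image + (1 - alpha) * p_text

# Tiny illustration with two classes and six reference instances.
refs = np.array([[1.0, 0.0], [1.1, 0.0], [0.9, 0.0],
                 [0.0, 1.0], [0.0, 1.1], [0.0, 0.9]])
ref_labels = np.array([0, 0, 0, 1, 1, 1])
p_img = knn_class_probability(refs, ref_labels, np.array([1.0, 0.05]), k=3, n_classes=2)
fused = fuse_probabilities(p_img, np.array([0.6, 0.4]), 80, 20)
```

Because both modalities now contribute probabilities on the same [0, 1] scale, the fusion step no longer mixes incomparable raw decision values from softmax and SVM.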