Hyperspectral image classification based on cascaded multi-classifiers
2019, Vol. 24, No. 11, pp. 2021-2034
Received: 2019-03-06; Revised: 2019-05-08; Accepted: 2019-05-15; Published in print: 2019-11-16
DOI: 10.11834/jig.190047
Objective
In hyperspectral classification tasks, the large number of bands, the noise contained in the images, and the uneven distribution of samples across ground-object classes make it difficult to balance classification accuracy against training efficiency and lead to low accuracy on small samples. This study therefore proposes a hyperspectral image classification method based on cascaded multi-classifiers.
Method
First, principal component analysis is used to transform highly correlated high-dimensional features into uncorrelated low-dimensional features, which speeds up texture feature extraction with Gabor filters. Gabor filters are then used to extract texture information at multiple scales and orientations; each filter generates one feature map. In each feature map, a $$ d \times d $$ neighborhood centered on the sample to be classified is taken, and the mean and variance of the data within the neighborhood serve as the spatial information of that sample. The spatial information is then fused with the spectral information to reduce the influence of illumination and noise. Finally, the joint spectral-spatial features are fed into the cascaded multi-classifiers, which output the average of the predicted class-probability distributions.
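The texture-extraction and fusion steps described above can be sketched in numpy as follows. This is a minimal illustration on toy data, not the authors' implementation; the kernel parameters (ksize, sigma, lam, gamma) and neighborhood size are illustrative assumptions:

```python
import numpy as np

def gabor_kernel(ksize=7, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5):
    """Real part of a Gabor kernel at orientation theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam))

def filter2d(img, ker):
    """Cross-correlate img with ker ('same' output size, reflected borders)."""
    half = ker.shape[0] // 2
    p = np.pad(img, half, mode='reflect')
    out = np.zeros(img.shape, dtype=np.float64)
    for i in range(ker.shape[0]):
        for j in range(ker.shape[1]):
            out += ker[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def spatial_features(feature_map, row, col, d=5):
    """Mean and variance of the d-by-d neighborhood centered at (row, col)."""
    half = d // 2
    padded = np.pad(feature_map, half, mode='reflect')
    patch = padded[row:row + d, col:col + d]  # window centered on (row, col)
    return patch.mean(), patch.var()

# Toy data: one reduced band and the spectral vector of one pixel
rng = np.random.default_rng(1)
band = rng.normal(size=(32, 32))
spectrum = rng.normal(size=(20,))

response = filter2d(band, gabor_kernel(theta=np.pi / 4))
m, v = spatial_features(response, 16, 16, d=5)
fused = np.concatenate([spectrum, [m, v]])  # joint spectral-spatial feature
print(fused.shape)  # (22,)
```

In the full method, each Gabor response map (one per scale and orientation) contributes a mean and a variance per pixel, so the spatial part of the feature vector grows with the size of the filter bank.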
Result
Experiments are conducted on three data sets, Indian Pines, Pavia University, and Salinas, against classical algorithms such as support vector machines and convolutional neural networks, with overall classification accuracy, average classification accuracy, and the Kappa coefficient as evaluation criteria. The overall classification accuracy of the proposed method reaches 97.24%, 99.57%, and 99.46% on the three data sets, which is 13.2%, 4.8%, and 5.68% higher than that of the support vector machine with a radial basis function (RBF) kernel, 2.18%, 0.36%, and 0.83% higher than that of the RBF-SVM (radial basis function-support vector machine) method with joint spectral-spatial features, and 3.27%, 3.2%, and 0.3% higher than that of the convolutional neural network. The Kappa coefficients are 0.968 6, 0.994 3, and 0.995 6, which are likewise improved.
Conclusion
Experimental results show that the proposed method achieves good classification performance on hyperspectral images with high training efficiency, does not depend on a GPU, and also achieves high classification accuracy on small samples.
Objective
Unlike conventional remote sensing images, hyperspectral images are composed of hundreds of spectral channels with extremely high spectral resolution, and each spectral channel holds an image of a specific spectral range. These channels provide rich spectral information that distinguishes object species and makes the analysis and processing of imaging targets technically feasible, thereby enabling applications of hyperspectral images in military affairs, environmental monitoring, mapping, agriculture, and disaster prevention. Hyperspectral image processing therefore has a wide range of applications, and hyperspectral image classification is one of its key tasks. Hyperspectral images have numerous bands, narrow bandwidths, a wide spectral response range, and high resolution. They integrate spatial-domain and spectral-domain information (i.e., spectral-image integration) and contain large amounts of data with considerable redundancy. The channels contain noise, and the distribution of samples over the various types of ground objects is uneven. Deep learning based on neural networks has become a popular trend in machine learning, and the same holds for the classification of hyperspectral image features. However, several problems remain with these deep learning methods: they require large amounts of training data, the training process requires a graphics processing unit (GPU) accelerator card, neural-network-based models are sensitive to hyperparameter settings, and the models transfer poorly. Moreover, noise in the spectral channels and the imbalanced sample distribution of the various ground objects usually cause classification accuracy and training efficiency to be unbalanced and the classification accuracy on small-sized samples to be relatively low. To address the problems mentioned above, this study proposes a novel classification method for hyperspectral images based on cascaded multiple classifiers.
Method
First, highly correlated high-dimensional features are converted into uncorrelated low-dimensional features by principal component analysis (PCA), which accelerates the Gabor-based texture feature extraction in the next step.
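As a rough sketch of this dimensionality-reduction step (assuming a cube of shape height x width x bands; this is a generic PCA via the band covariance matrix, not the paper's code):

```python
import numpy as np

def pca_reduce(cube, n_components):
    """Reduce the spectral dimension of a hyperspectral cube.

    cube: (H, W, B) array; returns a (H, W, n_components) array.
    """
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(np.float64)
    x -= x.mean(axis=0)                       # center each band
    cov = np.cov(x, rowvar=False)             # (B, B) band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]         # sort by descending variance
    components = eigvecs[:, order[:n_components]]
    return (x @ components).reshape(h, w, n_components)

# Example: a 10x10 cube whose 50 bands are highly correlated copies of one signal
rng = np.random.default_rng(0)
base = rng.normal(size=(10, 10, 1))
cube = base + 0.01 * rng.normal(size=(10, 10, 50))
reduced = pca_reduce(cube, 3)
print(reduced.shape)  # (10, 10, 3)
```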
Gabor filters are then used to extract image texture information at multiple scales and orientations, and each Gabor filter generates one feature map. In each feature map, a $$ d \times d $$ neighborhood centered on each unclassified sample is defined, and the mean and variance within the neighborhood are taken as the spatial information of the central sample. Spectral and spatial information are combined to reduce the influence of noise. Finally, the joint spectral-spatial features are input to the cascaded multiple classifiers, which generate the average probability distribution of each sample with respect to all ground-object classes. The cascaded multi-classifier combines XGBoost, random forest, ExtraTrees, and logistic regression, fully utilizing the advantages of these different methods to construct a cascaded multi-classifier model. The classifier is a hierarchical concatenation structure, and each level is internally a collection of multiple types of classifiers. In the model, each level of the cascade contains two XGBoost classifiers, two random forest classifiers, two ExtraTrees classifiers, and two logistic regression classifiers. Each stage in the cascade receives the feature information processed by the previous stage and outputs its own processing result to the next stage. The first level of the cascade takes the original samples as input, and every subsequent level takes as input the prediction results of the previous level concatenated with the original samples. The final output of the cascade is the average of the probability distributions predicted by the multiple classifiers in the last level. In other words, the predictions of the multiple classifiers at each level can be regarded as an abstract encoding of the input sample, and concatenating this encoding with the original sample enriches the sample's characteristics. To some extent, the model increases data randomness and prevents overfitting.
Result
Experiments on three benchmark data sets (i.e., Indian Pines, Pavia University, and Salinas) are conducted to evaluate the performance of the proposed method against classical methods, such as the SVM (support vector machine) and CNN (convolutional neural network). The experimental results are measured by three criteria, namely, overall classification accuracy, average classification accuracy, and kappa coefficient. The overall classification accuracies achieved by the proposed method on the three data sets are 97.24%, 99.57%, and 99.46%. The proposed method yields 13.2%, 4.8%, and 5.68% higher overall classification accuracy than the SVM with an RBF (radial basis function) kernel; 2.18%, 0.36%, and 0.83% higher than the RBF-SVM method with combined features; and 3.27%, 3.2%, and 0.3% higher than the CNN. The average classification accuracies achieved on the three data sets are 93.91%, 99.13%, and 99.61%, which are 18.28%, 6.21%, and 2.84% higher than those of the SVM with an RBF kernel, and 3.99%, 0.07%, and 0.58% higher than those of the RBF-SVM method with combined features. The kappa coefficients achieved on the three data sets are 0.968 6, 0.994 3, and 0.995 6, which also validate the superiority of the proposed method over the compared methods.
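The kappa coefficient used here is Cohen's kappa, computed from the classification confusion matrix; a small sketch with a hypothetical 3-class confusion matrix (made-up numbers, not the paper's data):

```python
import numpy as np

def kappa_coefficient(confusion):
    """Cohen's kappa from a confusion matrix (rows: true, cols: predicted)."""
    confusion = np.asarray(confusion, dtype=np.float64)
    n = confusion.sum()
    po = np.trace(confusion) / n                              # observed agreement
    pe = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2  # chance
    return (po - pe) / (1 - pe)

# Hypothetical 3-class confusion matrix
cm = [[50,  2,  1],
      [ 3, 45,  2],
      [ 1,  1, 48]]
print(round(kappa_coefficient(cm), 4))  # → 0.9019
```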
Conclusion
Experimental results indicate that the proposed method achieves superior classification performance on hyperspectral images compared with classical methods such as the SVM and CNN. The training efficiency of the proposed method is also relatively high compared with that of the other classical methods, without relying on a graphics processing unit. Furthermore, the proposed method can obtain high classification accuracy on small-sized samples.