
Qiu Yunfei, Wang Xingping, Wang Chunyan, Meng Lingguo (College of Software, Liaoning Technical University, Huludao 125105, China)

Abstract
Objective In hyperspectral classification, the large number of bands, the noise contained in the images, and the uneven sample distribution across ground-object classes make it difficult to balance classification accuracy against training efficiency, and classification accuracy on small samples is low. A hyperspectral image classification method based on cascaded multi-classifiers is therefore proposed. Method First, principal component analysis transforms the highly correlated high-dimensional features into uncorrelated low-dimensional features, which speeds up the subsequent texture-feature extraction by Gabor filters. Gabor filters then extract the image's texture information at multiple scales and orientations; each filter generates one feature map, in which a d×d neighborhood centered on the sample to be classified is taken, and the mean and variance of the data in that neighborhood serve as the sample's spatial information. The spatial and spectral information are then fused to reduce the influence of illumination and noise. Finally, the joint spectral-spatial features are fed into the cascaded multi-classifiers, which output the average of the predicted class-probability distributions for each sample. Result Experiments on three data sets (Indian Pines, Pavia University, and Salinas) compare the method with classical algorithms such as support vector machines and convolutional neural networks, using overall classification accuracy, average classification accuracy, and the Kappa coefficient as evaluation criteria. The overall classification accuracies of the proposed method on the three data sets reach 97.24%, 99.57%, and 99.46%, which are 13.2%, 4.8%, and 5.68% higher than those of the support vector machine with a radial basis function (RBF) kernel; 2.18%, 0.36%, and 0.83% higher than those of the RBF-SVM (radial basis function-support vector machine) method with joint spectral-spatial features; and 3.27%, 3.2%, and 0.3% higher than those of the convolutional neural network. The Kappa coefficients are 0.968 6, 0.994 3, and 0.995 6, likewise improved. Conclusion The experimental results show that the proposed method achieves good classification performance on hyperspectral images with high training efficiency and no dependence on a GPU, and it also attains high classification accuracy on small samples.
Hyperspectral image classification based on cascaded multi-classifiers

Qiu Yunfei,Wang Xingping,Wang Chunyan,Meng Lingguo(College of Software, Liaoning Technical University, Huludao 125105, China)

Objective Unlike conventional remote sensing images, hyperspectral images are composed of hundreds of spectral channels with extremely high spectral resolution, each channel holding an image of a specific spectral range. These channels provide rich spectral information for distinguishing ground-object species and an effective technical basis for analyzing and processing imaged targets, which gives hyperspectral imagery wide application in the military, environmental, mapping, agricultural, and disaster-prevention fields. Hyperspectral image classification is one of the key tasks in this field. Hyperspectral images have numerous bands, narrow bandwidths, a wide spectral response range, and high resolution; they integrate spatial-domain and spectral-domain information (i.e., image-spectrum integration) and carry large amounts of data with considerable redundancy. Moreover, the spectral channels contain noise, and the samples of the various ground-object classes are unevenly distributed. Deep learning based on neural networks has become a popular trend in machine learning, including in hyperspectral image classification. However, neural-network-based methods have their own problems: training requires large amounts of data and usually a graphics processing unit (GPU) acceleration card, the models are sensitive to hyperparameter settings, and their transferability is poor. Meanwhile, the noise in the spectral channels and the imbalanced sample distribution across ground-object classes usually make classification accuracy and training efficiency hard to balance and leave classification accuracy low on small-sized samples.
To address these problems, this study proposes a novel classification method for hyperspectral images based on cascaded multiple classifiers. Method First, highly correlated high-dimensional features are converted into uncorrelated low-dimensional features by principal component analysis, which accelerates the subsequent Gabor texture-feature extraction. Then, Gabor filters extract the image's texture information at multiple scales and orientations. Each Gabor filter generates one feature map; in the feature map, a d-by-d neighborhood centered on each unclassified sample is taken, and the mean and variance within the neighborhood serve as the spatial information of that sample. The spectral and spatial information are then combined to reduce the influence of noise. Finally, the joint spectral-spatial features are input to the cascaded multiple classifiers to generate, for each sample, the average probability distribution over all ground-object classes. The cascaded multi-classifier combines XGBoost, random forest, ExtraTrees, and logistic regression, fully exploiting the complementary advantages of these methods. The model is a hierarchical cascade in which each level is internally a collection of classifiers of multiple types: each level contains two XGBoost classifiers, two random forest classifiers, two ExtraTrees classifiers, and two logistic regression classifiers. Each level receives the feature information produced by the previous level and passes its own output to the next level. The first level takes the original samples as input; every subsequent level takes the previous level's predictions concatenated with the original samples.
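The feature-extraction pipeline described in the Method (principal component analysis, Gabor filtering, and d-by-d neighborhood statistics) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the kernel parameters, the neighborhood size d = 5, the number of principal components, and the toy data cube are all assumptions.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def pca_reduce(cube, n_components=3):
    """Project an (H, W, B) hyperspectral cube onto its top principal components."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(np.float64)
    X -= X.mean(axis=0)
    # SVD of the centered band matrix gives the principal directions
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[:n_components].T).reshape(H, W, n_components)

def gabor_kernel(ksize, sigma, theta, lam):
    """Real part of a ksize x ksize Gabor kernel at orientation theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def spatial_features(pc_images, d=5, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Per PC image and orientation, take the mean and variance in a
    d x d neighborhood of every pixel as its spatial information."""
    feats = []
    for b in range(pc_images.shape[2]):
        img = pc_images[:, :, b]
        for theta in thetas:
            resp = convolve(img, gabor_kernel(7, 2.0, theta, 4.0), mode='reflect')
            mean = uniform_filter(resp, size=d)
            var = uniform_filter(resp**2, size=d) - mean**2
            feats += [mean, var]
    return np.stack(feats, axis=-1)

# Toy cube: 20 x 20 pixels, 50 spectral bands (illustrative data only)
rng = np.random.default_rng(0)
cube = rng.normal(size=(20, 20, 50))
pcs = pca_reduce(cube, n_components=3)
feats = spatial_features(pcs, d=5)           # 3 PCs * 4 orientations * 2 stats = 24
joint = np.concatenate([cube, feats], axis=-1)  # joint spectral-spatial feature
print(joint.shape)  # -> (20, 20, 74)
```

Each pixel's final feature vector concatenates its full spectrum with the Gabor-neighborhood statistics, matching the spectral-spatial fusion step described above.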
The final output of the cascade is the average of the probability distributions predicted by the multiple classifiers in its last level. In other words, the predictions of the classifiers at each level can be regarded as an abstract encoding, and concatenating this encoding with the original sample enriches the sample's features. To some extent, the model increases data randomness and prevents overfitting. Result Experiments on three benchmark data sets (i.e., Indian Pines, Pavia University, and Salinas) are conducted to evaluate the performance of the proposed method against classical methods, such as SVM (support vector machine) and CNN (convolutional neural network). The experimental results are measured by three criteria, namely, overall classification accuracy, average classification accuracy, and kappa coefficient. The overall classification accuracies achieved by the proposed method on the three data sets are 97.24%, 99.57%, and 99.46%. The proposed method yields 13.2%, 4.8%, and 5.68% higher overall classification accuracy than SVM with an RBF (radial basis function) kernel; 2.18%, 0.36%, and 0.83% higher than the RBF-SVM method with joint spectral-spatial features; and 3.27%, 3.2%, and 0.3% higher than CNN. The average classification accuracies achieved by the proposed method on the three data sets are 93.91%, 99.13%, and 99.61%. The proposed method presents 18.28%, 6.21%, and 2.84% higher average classification accuracy than SVM with an RBF kernel, and 3.99%, 0.07%, and 0.58% higher than the RBF-SVM method with joint spectral-spatial features. The kappa coefficients achieved by the proposed method on the three data sets are 0.968 6, 0.994 3, and 0.995 6, which also validate the superiority of the proposed method over the other methods.
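The cascade structure described above can be sketched as follows. This is a minimal illustration assuming scikit-learn, not the authors' implementation: GradientBoostingClassifier stands in for XGBoost to avoid an extra dependency, and the layer count, estimator counts, and toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

def make_layer(seed):
    # One cascade level: two classifiers of each type.
    # GradientBoosting stands in for XGBoost in this sketch.
    return [GradientBoostingClassifier(random_state=seed),
            GradientBoostingClassifier(random_state=seed + 1),
            RandomForestClassifier(n_estimators=50, random_state=seed),
            RandomForestClassifier(n_estimators=50, random_state=seed + 1),
            ExtraTreesClassifier(n_estimators=50, random_state=seed),
            ExtraTreesClassifier(n_estimators=50, random_state=seed + 1),
            LogisticRegression(max_iter=1000),
            LogisticRegression(max_iter=1000, C=0.5)]

class CascadeForest:
    def __init__(self, n_layers=2):
        self.layers = [make_layer(10 * i) for i in range(n_layers)]

    def fit(self, X, y):
        aug = X
        for layer in self.layers:
            probs = []
            for clf in layer:
                clf.fit(aug, y)
                probs.append(clf.predict_proba(aug))
            # The next level sees the original features concatenated with
            # this level's class-probability "encoding".
            aug = np.hstack([X] + probs)
        return self

    def predict(self, X):
        aug = X
        for layer in self.layers:
            probs = [clf.predict_proba(aug) for clf in layer]
            aug = np.hstack([X] + probs)
        # Final output: average of the last level's probability distributions.
        return np.mean(probs, axis=0).argmax(axis=1)

# Toy spectral-spatial feature matrix (illustrative data only)
X, y = make_classification(n_samples=200, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)
model = CascadeForest(n_layers=2).fit(X, y)
print((model.predict(X) == y).mean())  # training accuracy on the toy data
```

The key design point mirrored here is that every level re-appends the original features to the previous level's probability outputs, so later levels refine rather than replace the raw spectral-spatial representation, and the final decision averages the last level's distributions.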
Conclusion Experimental results indicate that the proposed method achieves superior classification performance on hyperspectral images compared with classical methods such as SVM and CNN. Its training efficiency is also relatively high compared with that of other classical methods, without relying on a graphics processing unit. Furthermore, the proposed method obtains high classification accuracy on small-sized samples.