Published: 2019-01-16
DOI: 10.11834/jig.180437
2019 | Volume 24 | Number 1

NCIG 2018 Conference Special Column








Sample optimized selection of hyperspectral image classification
Fang Shuai1,2, Zhu Fengjuan1, Dong Zhangyu1,2, Zhang Jing1
1. School of Computer Science and Information Engineering, Hefei University of Technology, Hefei 230000, China;
2. Anhui Province Key Laboratory of Industry Safety and Emergency Technology, Hefei 230000, China
Supported by: National Natural Science Foundation of China (61472380)


Objective In recent years, remote sensing images have found an increasing number of applications, and hyperspectral image classification is a widely used method for hyperspectral image processing. For the traditional hyperspectral classification problem, mainstream improvements have focused on optimizing the classifier or the classification algorithm. This approach does not address the existing limitations; thus, we proposed new improvements on the basis of this problem: while improving the classifier, the sample space was also optimized to obtain a group of representative training samples that ensure the overall classification accuracy. Traditional classification methods consider only improvements to the classification algorithm and the classifier and acquire samples through various random selection methods. However, the spectral properties of even the same kind of substance differ. Given these in-class spectral differences, the conventional strategy of randomly selecting a certain proportion of samples from each substance cannot guarantee that the selected training samples contain the complete spectral features. To solve this problem, a sample space optimization strategy based on in-class reclustering was proposed. In this way, the selected training samples are guaranteed to contain the various spectral curves of each class and to be uniformly distributed over every subclass of each class. Moreover, to further improve the classification accuracy, the classifier must also be improved. According to ensemble learning, points on which multiple classifiers agree are classified with higher accuracy, whereas points on which they disagree have a higher error rate. Therefore, the low-confidence classification results with low accuracy were optimized again using the high-confidence region with high accuracy.
In this paper, the optimization method was a correction strategy based on neighborhood high-confidence information, which optimized the classification results of the low-confidence region by propagating the classification results of the high-confidence region, thereby improving the accuracy of the low-confidence region and the overall classification accuracy. Given that the classification strategy used in this paper is pointwise classification, neighborhood information was not considered; in fact, the category of a point is generally the same as that of the surrounding region. Therefore, we used an edge-preserving filter to smooth the class information while protecting edges, ensuring consistency between the information at a point and that of its neighborhood and further improving the classification accuracy. Method In this paper, the fuzzy C-means (FCM) clustering algorithm was used to recluster the samples within each class. As the spectral characteristics of samples of the same kind differ, each class was grouped into several subclasses according to these differences. When selecting samples, we ensured that samples were drawn from every subclass of each class so that the samples covered the entire sample space. For the correction strategy based on neighborhood high-confidence information, an edge-preserving filter was used to optimize the low-confidence information with the high-confidence region information. First, two simple classifiers, namely, a support vector machine (SVM) classifier and a sparse representation-based classifier (SRC), were used, and the consistency of their classification results was tested: the point set with consistent classification results formed the high-confidence region, and the point set with inconsistent results formed the low-confidence region.
Then, the results of the low-confidence region were optimized with the edge-preserving filter. First, the hyperspectral image was processed by principal component analysis, and the first principal component, which contains most of the structural information of the image, was used as the guidance image. Then, the high-confidence region was filtered, and the high-confidence information was propagated to the small number of low-confidence points. In this way, each low-confidence point obtained new category information that replaced its original low-confidence label, thereby correcting the classification results of the low-confidence region. The edge-preserving filter has the property of edge-preserving smoothing. The aforementioned strategies greatly improved our classification results. In addition, even when only a small proportion of training samples was selected, a strong classifier could still be trained after the sample space was optimized, ensuring the stability of the classification accuracy. Result The experiments used three datasets, namely, the India Pines, Salinas, and PaviaU datasets. Two sets of experiments were set up with different sample selection proportions to compare the classic algorithms and the proposed classification algorithm. In the first experiment, we selected 10%, 1%, and 1% training samples. The experimental results revealed that the overall accuracy (OA) values of the three datasets reached 98.93%, 99.78%, and 99.40%, respectively, approximately 1% higher than those of the other optimal algorithms. In the second, small-proportion sample experiment, we set the sample proportions to 1%, 0.3%, and 0.4%. The OA values for the India Pines, Salinas, and PaviaU datasets reached 90.48%, 99.68%, and 98.54%, respectively. The OA values of the three datasets were 4%-6% higher than those of the other algorithms at the same proportions.
The experimental results suggested that the proposed algorithm is superior to the other algorithms and that the classification accuracy was greatly improved, particularly in experiments with small sample proportions. Conclusion In this paper, representative and balanced samples were selected through a sample space optimization strategy to ensure that the classification accuracy remains high for small sample proportions. The correction strategy based on neighborhood high-confidence information offered a good optimization effect. Moreover, the algorithm adapts to many datasets and achieves good robustness. In summary, the results showed that reducing the sample proportion causes a rapid, unstable decline in the classification performance of traditional algorithms, whereas the proposed algorithm offers obvious advantages, ensuring not only high accuracy but also stability of the classification results.

Key words

remote sensing; hyperspectral classification; spectral characteristic; sample space optimization; class reclustering; high confidence region; edge protection filtering

0 Introduction


Hyperspectral classification is pixel-level classification, so the classification results exhibit noise-like mislabeled points, as shown in Fig. 1(b)(c). This is especially pronounced in early methods that use only spectral information and ignore spatial information, such as sparse representation classification (SRC) [1], extended random walker classification (ERW) [2], and SVM [3].

Fig. 1 The classification of different sample scale points in hyperspectral images ((a) pseudo-color diagram; (b) SRC classification diagram (10%); (c) SVM classification diagram (10%); (d) SRC classification diagram (1%); (e) SVM classification diagram (1%))








1) It suits small sample proportions, shortening the computation time;

2) It optimizes the sample space, so the classification results remain good even when the sample selection proportion is very small;

3) The edge-preserving filter corrects the labels of low-confidence regions while preserving the structure of the original image, yielding a high-accuracy classification map;

4) Obtaining sample points in the high-confidence region provides more reliably labeled points, making the advantage of the edge-preserving filter significant.

1 Phenomenon analysis

1.1 Problem discovery

Observing pixel-level hyperspectral classification results reveals noise-like misclassified points. Fig. 1(a) shows the pseudo-color image $\mathit{\boldsymbol{I}}$ of a hyperspectral image; Fig. 1(b)(d) show the SRC classification maps of $\mathit{\boldsymbol{I}}$ with 10% and 1% training samples, and Fig. 1(c)(e) show the corresponding SVM classification maps. The maps show that classification results differ even for pixels of the same land-cover class. Likewise, as seen in Fig. 1(a), the pixel values of neighboring pixels also differ.

1.2 Problem analysis

The comparison in Fig. 1 leads to a conjecture: the spectral characteristics within a single land-cover class may themselves differ. If the selected training samples do not contain all the spectral curves of a class, then points of that class whose spectral curves differ from those of the training samples will be misclassified, even though they belong to the same class. Traditional training samples are selected at random, so when the number of training samples decreases, fewer spectral curves are covered and the probability of mislabeling increases.

1.3 Problem verification

To verify this conjecture, we randomly selected samples to train the classifier and observed the classification results under different sample selections, thereby assessing the influence of in-class spectral differences on classification; the verification is shown in Fig. 2. Suppose a land-cover class contains three spectral curves. When the randomly selected samples all follow a single curve, the corresponding correctly and incorrectly classified curves are shown in the first row of Fig. 2; when the selected samples cover all three curves, the corresponding curves are shown in the second row of Fig. 2. These experiments verify the conjecture: 1) spectral differences exist within a class; 2) a random training-sample selection strategy cannot guarantee a reasonable sample distribution, and when the selected training samples do not cover the whole sample space, the uncovered subclasses are prone to misclassification. We therefore use an in-class sample reclustering strategy to optimize the sample space, ensuring that the training samples selected for each class cover the true sample space in a balanced way, which solves the above problem.

Fig. 2 The correct and wrong classification curves with different sample selection
((a) the correct classification curves; (b) the wrong classification curves)

2 Classification algorithm framework


2.1 Overall framework

The proposed algorithm comprises preprocessing, in-class sample selection, and classification. Preprocessing uses dimensionality-reduction denoising and intrinsic image decomposition to remove noise, shadows, and other information unrelated to spectral characteristics, yielding a hyperspectral image $\mathit{\boldsymbol{R}}$ whose spectral characteristics are closer to reality. Sample selection accounts for in-class spectral differences through the proposed in-class reclustering strategy, which ensures the balance of samples within each class. To improve classification accuracy, the classification stage borrows the idea of ensemble learning and is implemented with two simple classifiers, SVM and SRC. Points on which the SVM and SRC classifiers agree serve as high-confidence region points, while points on which they disagree are re-corrected using their neighborhood classification results, here by means of an edge-preserving filter. The edge-preserving filter keeps edge regions well preserved while, in flat regions, using surrounding neighborhood information to fill in and smooth pixels. It is therefore used to correct low-confidence classification results with neighborhood information, producing a good classification result. The framework is shown in the red box of Fig. 3.

Fig. 3 Overall framework

2.2 Preprocessing

2.2.1 Mean dimensionality reduction



1) Divide the $N$-dimensional HSI image $\mathit{\boldsymbol{I}}$ into $M$ groups, each of length $\left\lceil {N/M} \right\rceil $; if the last group falls short of this length, the remaining bands form a group.

2) For each group, add the corresponding points of the $\left\lceil {N/M} \right\rceil $-dimensional data and take the mean (the last group is averaged over its actual dimensionality). Each group yields a one-dimensional image, giving an $M$-dimensional denoised hyperspectral image.
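The two steps can be sketched in NumPy as follows; this is a minimal illustration (not the authors' code) that stores the cube as a (height, width, bands) array and uses `np.array_split` to produce exactly $M$ groups, each of length $\left\lceil {N/M} \right\rceil $ or one band fewer:

```python
import numpy as np

def mean_reduce(hsi, m):
    """Reduce an N-band cube of shape (H, W, N) to M bands.

    The N band indices are split into M consecutive groups (group lengths
    differ by at most one, mirroring the shorter final groups in the text),
    and each group is collapsed to its per-pixel mean.
    """
    groups = np.array_split(np.arange(hsi.shape[2]), m)
    return np.stack([hsi[:, :, g].mean(axis=2) for g in groups], axis=2)

# Toy stand-in for the 200-band India Pines cube, reduced to M = 32 bands
cube = np.random.rand(4, 4, 200)
reduced = mean_reduce(cube, 32)
print(reduced.shape)  # (4, 4, 32)
```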

2.2.2 Intrinsic image decomposition


$ \mathit{\boldsymbol{I}} = \mathit{\boldsymbol{S}} \cdot \mathit{\boldsymbol{R}} $ (1)


On the basis of the $M$-dimensional reduced image, intrinsic image decomposition is performed on groups of $Z$ channels, yielding an $M$-dimensional reflectance image $\mathit{\boldsymbol{R}}$ and its corresponding $\left\lceil {M/Z} \right\rceil $-dimensional shading image $\mathit{\boldsymbol{S}}$. The resulting $\mathit{\boldsymbol{R}}$ is the $\mathit{\boldsymbol{R}}$ in Fig. 3.
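A sketch of the dimension bookkeeping only: the actual decomposition is the optimization-based method of [16-17]; here a crude per-group mean intensity stands in for the shading, so the code illustrates the shapes of $\mathit{\boldsymbol{R}}$ and $\mathit{\boldsymbol{S}}$, not the real algorithm:

```python
import numpy as np

def groupwise_decompose(img, z):
    """Group-wise decomposition following Eq. (1), I = S * R, per Z-band group.

    img has shape (H, W, M). R keeps all M channels; S gets one channel per
    group of z bands, i.e. ceil(M / z) channels. The per-group mean intensity
    used for S here is only an illustrative proxy for real shading.
    """
    h, w, m = img.shape
    r = np.empty_like(img)
    s_maps = []
    for i in range(0, m, z):
        grp = img[:, :, i:i + z]
        s = grp.mean(axis=2) + 1e-8          # one shading map per group
        r[:, :, i:i + z] = grp / s[..., None]
        s_maps.append(s)
    return r, np.stack(s_maps, axis=2)

img = np.random.rand(3, 3, 32) + 0.1        # M = 32 reduced bands
R, S = groupwise_decompose(img, 4)          # Z = 4
print(R.shape, S.shape)  # (3, 3, 32) (3, 3, 8)
```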

2.3 In-class reclustering


As analyzed above, hyperspectral misclassification arises in part from spectral differences within the same class of targets, and a random training-sample selection strategy cannot guarantee a reasonable distribution of training samples. Fully accounting for in-class sample differences, this paper proposes a strategy based on in-class sample reclustering to ensure that the training samples selected for each class cover the true sample space in a balanced way. Classification then remains good even when the sample proportion decreases, as shown in the blue dashed box of Fig. 3. Optimizing the sample space comprises the following two parts:

1) In-class sample reclustering: the samples of each class are clustered into $k$ subclasses with FCM [18] (Sub_samples in Fig. 3). $k$ is chosen according to the proportion of each class's samples among all samples, with $k_0$ as a tuning parameter: for the same $k_0$, the higher a class's sample proportion, the larger $k$; conversely, the smaller $k$.

2) Balanced sample selection: when a class is clustered into several subclasses, samples are selected from the corresponding subclasses in proportion, ensuring that the samples span every subclass. This yields the training samples (Training samples in Fig. 3).
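A minimal sketch of the two steps, with a bare-bones FCM (fuzziness $b$, cf. Section 3.3) written directly in NumPy; the toy data, subclass count $k$, and per-subclass quota rule are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def fcm(x, k, b=2.0, iters=100, seed=0):
    """Bare-bones fuzzy C-means on rows of x (n, d); returns hard subclass labels."""
    rng = np.random.default_rng(seed)
    u = rng.random((x.shape[0], k))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** b
        centers = (um.T @ x) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (b - 1.0))          # standard FCM membership update
        u = inv / inv.sum(axis=1, keepdims=True)
    return u.argmax(axis=1)

def balanced_select(x, n_train, k, b=2.0, seed=0):
    """Recluster one class into k subclasses, then draw training samples
    from every non-empty subclass in proportion to its size (at least one each)."""
    labels = fcm(x, k, b, seed=seed)
    rng = np.random.default_rng(seed)
    chosen = []
    for c in range(k):
        idx = np.flatnonzero(labels == c)
        if idx.size == 0:
            continue
        quota = max(1, round(n_train * idx.size / x.shape[0]))
        chosen.extend(rng.choice(idx, size=min(quota, idx.size), replace=False))
    return np.sort(np.array(chosen))

# One "class" whose spectra form two distinct subclasses
x = np.vstack([np.random.default_rng(1).normal(0, 0.1, (20, 5)),
               np.random.default_rng(2).normal(5, 0.1, (20, 5))])
sel = balanced_select(x, n_train=4, k=2)
print(sel)  # indices drawn from both subclasses
```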

Fig. 4 takes the Corn class as an example to compare the effect of sample-space optimization on classification. Reclustering the Corn samples yields three subclasses, shown in brown, blue, and orange in Fig. 4(b). With proportional random selection, the selected sample points fall only in the orange subclass region, with none in the blue or brown regions, as shown in Fig. 4(c). With the in-class reclustering strategy, the selected samples span all three subclass regions, as shown in Fig. 4(d). Fig. 5 likewise shows that classification with samples selected after clustering improves markedly.

Fig. 4 The sample distribution of the Corn ((a) real graph; (b) cluster gram; (c) sample point distribution for randomized strategy; (d) sample point distribution for optimizing sample space)
Fig. 5 The curves of the corn ((a) all corresponding spectral curves; (b) correctly classified point spectral curves; (c) incorrectly classified point spectral curves)

Fig. 6 takes the Soybeans-clean till class as an example. The samples selected in Fig. 6(c) cover only the blue, brown, and light-blue subclasses, whereas those in Fig. 6(d) cover the brown, orange, green, blue, light-blue, and purple subclasses. Correspondingly, in Fig. 7(b), the regions of the orange, green, purple, and light-blue subclasses are clearly misclassified, while the same regions in Fig. 7(c) contain only a few errors. Fig. 8 shows similar results. From these two analyses we conclude: 1) the sample-space optimization strategy based on in-class clustering ensures that the training samples cover the whole in-class sample space; 2) regions covered by training samples are classified with high accuracy, whereas uncovered regions are prone to obvious misclassification.

Fig. 6 The sample distribution of the Soybeans-clean till ((a) real graph; (b) cluster gram; (c) sample point distribution for randomized strategy; (d) sample point distribution for optimizing sample space)
Fig. 7 Classification diagram before and after optimization of sample space
((a) the ground truth; (b) unoptimized sample distribution (80.72%); (c) optimized sample distribution (90.10%))
Fig. 8 The curves of the Soybeans-clean till ((a) all corresponding spectral curves; (b) correctly classified point spectral curves; (c) incorrectly classified point spectral curves)

In Table 1, an "Improved" value of 1 indicates that the class's accuracy increased after sample-space optimization; 12 classes improved. Four classes did not: classes 9, 11, 12, and 16. Classes 9 and 16 correspond to the Oats and Stone-steel towers classes in Fig. 7(a) and Fig. 9(a); only one training sample is selected for each, so in-class reclustering has no effect on them, and the change in their accuracy is due to randomness in sample selection. Classes 11 and 12 correspond to the Soybeans-min till and Soybeans-clean till classes in Fig. 7(a) and Fig. 9(a). Observing the red class in Fig. 7(c), large patches of blue and yellow misclassified points appear within it, and the blue and yellow classes likewise contain red misclassified points; in class 12, the purple class contains erroneous orange points, and the orange class contains purple points. These errors are caused by between-class similarity. Because this paper draws training samples by in-class reclustering, the training samples are evenly distributed, so spectrally similar samples from both classes are selected as training samples, and their participation degrades classification performance. This reflects a known difficulty of hyperspectral classification, the "different objects with the same spectrum" problem. Our algorithm considers only in-class differences, that is, the "same object with different spectra" problem, and thus cannot yet resolve the former. Although a few classes show no accuracy gain after sample-space optimization, their accuracy is comparable before and after, and the overall accuracy of Fig. 7(c) greatly exceeds that of Fig. 7(b). In summary, optimizing the sample space improves the accuracy of most classes and thus greatly improves the overall classification accuracy.

Table 1 Comparison table of classification accuracy of each subclass before and after sample space optimization when India Pines takes 1% samples

Class  Samples  Accuracy (unoptimized)  Accuracy (optimized)  Improved
1 1 0.9783 1.0000 1
2 15 0.6887 0.8641 1
3 9 0.6384 0.7156 1
4 3 0.8591 0.9518 1
5 5 0.6876 1.0000 1
6 8 0.9893 0.9903 1
7 1 0.2126 0.6000 1
8 5 1.0000 1.0000 1
9 1 1.0000 0.6552 0
10 10 0.7196 0.8954 1
11 25 0.9512 0.8888 0
12 6 0.7896 0.7486 0
13 3 0.9663 1.0000 1
14 13 0.8990 0.9952 1
15 4 0.5443 0.9973 1
16 1 1.0000 0.9684 0
Note: Classes 1-16 correspond to the classes in Fig. 9(a): 1 (Alfalfa); 2 (Corn-no till); 3 (Corn-min till); 4 (Corn); 5 (Grass/pasture); 6 (Grass/trees); 7 (Grass/pasture-mowed); 8 (Hay-windrowed); 9 (Oats); 10 (Soybeans-no till); 11 (Soybeans-min till); 12 (Soybeans-clean till); 13 (Wheat); 14 (Woods); 15 (Bldg-Grass-Tree-Drives); 16 (Stone-steel towers).
Fig. 9 Dataset((a) India Pines; (b) Salinas; (c) PaviaU)

2.4 Classification map

To improve the classification results, we borrow the idea of ensemble learning and use two simple classifiers, SRC and SVM. Regions where the two classifiers agree are regarded as the high-confidence region, and regions where they disagree as the low-confidence region. For the low-confidence region, under the local-consistency assumption, the high-confidence classification results of the surrounding region are used for correction; this correction is implemented with an edge-preserving filter. The whole classification process is shown in the red box of Fig. 3.

2.4.1 High-confidence region


$ \mathit{HCS}\left( {{x_1}, {x_2}} \right) = \left\{ {\begin{array}{*{20}{c}} {{x_1}}&{{x_1} = {x_2}}\\ 0&{{x_1} \ne {x_2}} \end{array}} \right. $ (2)
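Eq. (2) is an element-wise consistency test on the two label maps; a minimal sketch, assuming class labels start at 1 so that 0 can mark the low-confidence region:

```python
import numpy as np

def hcs(x1, x2):
    """High-confidence selection per Eq. (2): keep the label where the two
    classifiers agree, write 0 (low-confidence marker) where they disagree."""
    return np.where(x1 == x2, x1, 0)

svm_map = np.array([[1, 2], [3, 3]])   # toy SVM label map
src_map = np.array([[1, 2], [4, 3]])   # toy SRC label map
print(hcs(svm_map, src_map))  # [[1 2] [0 3]]
```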


2.4.2 Edge-preserving filtering


1) Obtain one probability map per class. Each probability map is a binary image: 0 means the probability of belonging to that class is 0, and 1 means the probability is 1.

2) Apply guided filtering to each probability map. $\mathit{\boldsymbol{R}}$ is processed with PCA [20], and the resulting first principal component is used as the guidance image; because the first principal component preserves most of the structural information of the image, using it as the guidance image ensures that the probability maps keep the structural information of the HSI image. After filtering, each probability map becomes a gray-scale image, and pixels whose probability was 0 take probability values between 0 and 1.

3) This yields a set of optimized probability maps. Each point, whether it belongs to the original high-confidence region or is a noise-like point labeled 0, now corresponds to several values; the class with the maximum probability is taken as the class of that point. This gives the final classification map $\mathit{\boldsymbol{MAP}}$.
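Steps 1)-3) can be sketched as follows, with a plain-NumPy gray-scale guided filter [19] standing in for the edge-preserving filter; the toy guidance image and label map are illustrative assumptions:

```python
import numpy as np

def box_mean(img, r):
    """Mean filter with a (2r+1) x (2r+1) window, edge-padded."""
    p = np.pad(img, r, mode='edge')
    h, w = img.shape
    acc = np.zeros((h, w))
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            acc += p[dy:dy + h, dx:dx + w]
    return acc / (2 * r + 1) ** 2

def guided_filter(guide, src, r=3, eps=1e-3):
    """Gray-scale guided filter [19]: smooths src while following guide's edges."""
    m_i, m_p = box_mean(guide, r), box_mean(src, r)
    var_i = box_mean(guide * guide, r) - m_i * m_i
    cov_ip = box_mean(guide * src, r) - m_i * m_p
    a = cov_ip / (var_i + eps)
    b = m_p - a * m_i
    return box_mean(a, r) * guide + box_mean(b, r)

def refine(hcs_map, guide, n_classes, r=3, eps=1e-3):
    """Steps 1)-3): one binary probability map per class, guided filtering,
    then per-pixel argmax to relabel the 0-marked low-confidence points."""
    probs = [guided_filter(guide, (hcs_map == c).astype(float), r, eps)
             for c in range(1, n_classes + 1)]
    return np.argmax(np.stack(probs), axis=0) + 1

# Toy scene: "first principal component" with one edge; two low-confidence holes
guide = np.zeros((10, 10)); guide[:, 5:] = 1.0
hcs_map = np.full((10, 10), 2); hcs_map[:, :5] = 1
hcs_map[3, 2] = 0; hcs_map[6, 7] = 0
final_map = refine(hcs_map, guide, n_classes=2, r=2)
print(final_map[3, 2], final_map[6, 7])  # holes take their region's label: 1 2
```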

3 Experiments

3.1 Experimental data


1) India Pines: this scene was acquired by the AVIRIS sensor over the Indian Pines test site in northwestern Indiana. The spatial resolution of the image is 20 m × 20 m, and the scene is 145 × 145 pixels with 220 reflectance bands covering wavelengths of 0.4-2.5 μm. For classification, the 20 bands covering water-absorption regions (bands 104-108, 150-163, and 220) are removed, leaving 200 bands.

2) Salinas: this scene was acquired by the AVIRIS sensor over the Salinas Valley, California, with a high spatial resolution (3.7 m × 3.7 m). The scene is 512 × 217 pixels with 224 reflectance bands; likewise, the 20 bands covering water-absorption regions (bands 108-112, 154-167, and 224) are discarded, leaving 204 bands.

3) PaviaU (Pavia University): this scene was acquired by the ROSIS sensor during a flight over Pavia, northern Italy, with a spatial resolution of 1.3 m × 1.3 m. The scene is 610 × 340 pixels with 103 bands. Because the scene contains some meaningless information, that information must be removed before analysis, and only part of the scene is generally analyzed. The three datasets are shown in Fig. 9.

3.2 Evaluation criteria


3.3 Parameter settings

The experimental parameters include the training-sample proportion for each dataset, the degree of class mixing $b$ of FCM clustering, the subclass-number tuning parameter $k_0$, the window size $r$ and coefficient $\varepsilon $ of the edge-preserving filter, and the dimensionality $M$ of mean reduction and group length $Z$ of intrinsic image decomposition.

1) Number of training samples. The training-sample proportion differs across datasets. In the comparison experiments, the proportion is 10% for India Pines, 1% for Salinas, and 1% for PaviaU.

2) FCM clustering parameters. The degree of mixing $b$ between subclasses is set to 2. The subclass-number tuning parameter $k_0$ differs across datasets: 100 for India Pines and Salinas, and 90 for PaviaU.

3) Edge-preserving filter parameters. Guided filtering has two parameters, $r$ and $\varepsilon $, the filter size and the regularization parameter, whose choice strongly affects the filtering; here $r$=3 and $\varepsilon $=0.001.

4) Intrinsic image decomposition parameters. For the three datasets, $M$=32 and $Z$=4. The specific parameters are listed in Table 2.

Table 2 Experimental parameters

Dataset  Training samples  $b$  $k_0$  $M$  $Z$  $r$  $\varepsilon $
India Pines 10% 2 100 32 4 3 0.001
Salinas 1% 2 100 32 4 3 0.001
PaviaU 1% 2 90 32 4 3 0.001

3.4 Experimental results and analysis

3.4.1 Comparison experiments

We set up comparison experiments with the related algorithms SVM, EPFs, and IID [17], and with the recent algorithms LCMR [21] and MFASR. The related algorithms highlight the advantages of our algorithm; LCMR is a recent feature-extraction algorithm that classifies with SVM, and MFASR is a recent feature-extraction algorithm that classifies with SRC. The parameters are set as in Table 2, and the OA values are averaged over 10 runs.

1) The results on India Pines are shown in Fig. 10. The SVM result contains many noisy points; EPFs denoises better than SVM but still contains many erroneous regions; the IID map has no large erroneous regions but relatively obvious noise; LCMR shows little noise but a few erroneous regions; MFASR shows no noise but a small number of large detail errors. Our method both denoises effectively and reduces erroneous patches, giving the best classification map, with an accuracy of 98.93%.

Fig. 10 India Pines' classification results

2) The results on Salinas are shown in Fig. 11. Again, our algorithm balances the classification of class edges and within-class regions and performs best.

Fig. 11 Salinas' classification results

3) The results on PaviaU are shown in Fig. 12. The improvement mirrors that on India Pines and Salinas, and our algorithm again performs best.

Fig. 12 PaviaU's classification results

3.4.2 Comparison experiments with small sample proportions

Hyperspectral classification now tends increasingly toward using few samples to reduce the computational cost of image classification. We compare our algorithm with recent algorithms that classify well with small sample proportions: SVM, EPFs, MFASR, IID, LCMR, and PCA-EPFs [22]. Here, 1% training samples are taken for India Pines, 0.3% for Salinas, and 0.4% for PaviaU; the other parameters are set as in Table 2. The classification results on the three datasets are shown in Tables 3-5.

Table 3 Comparison of indicators on India Pines using 1% training samples

OA 56.61 63.37 72.06 82.39 83.57 84.43 90.48
AA 58.21 52.44 70.7 82.08 88.23 85.33 89.71
Kappa 49.46 56.81 67.89 81.66 81.41 82.25 89.14

Table 4 Comparison of indicators on Salinas using 0.3% training samples

OA 84.5 87.87 86.39 98.81 97.06 95.53 99.68
AA 90.06 93.12 90.85 98.84 98.16 96.22 99.44
Kappa 82.68 86.42 84.82 98.67 96.74 95.02 99.64

Table 5 Comparison of indicators on PaviaU using 0.4% training samples

OA 82.8 88.37 78.06 97.57 95.01 94.57 98.54
AA 79.95 88.89 76.44 97.73 91.68 91.64 98.6
Kappa 76.84 84.15 70.67 96.77 93.43 92.81 98.07

Table 6 Operation time of different training sample proportions

India Pines Salinas PaviaU
10% 1% 1% 0.30% 1% 0.40%
Runtime/s 12.22 2.17 18.09 13.95 15.35 11.83

Similarly, we compared the runtime with the normal and the small number of training samples for each dataset, using an Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz. As Table 6 shows, when the sample proportion is small, the runtime on the corresponding dataset decreases, especially for India Pines.

4 Conclusion


1) A classification framework based on the idea of ensemble learning. Two simple classifiers determine the high-confidence and low-confidence regions; edge-preserving filtering then re-optimizes the low-confidence classification results, assigning them labels according to neighboring high-confidence region information and thereby improving classification accuracy.

2) An in-class reclustering strategy that obtains representative, balanced training samples.

3) Intrinsic image decomposition as preprocessing. It removes the shadows, textures, and other useless spatial information produced by illumination and object surface shape that are unrelated to the material's nature, so that the spectral features used for classification reflect the essential characteristics of the material, benefiting subsequent classification.




  • [1] Fang L Y, Li S T, Kang X D, et al. Spectral-spatial hyperspectral image classification via multiscale adaptive sparse representation[J]. IEEE Transactions on Geoscience and Remote Sensing, 2014, 52(12): 7738–7749. [DOI:10.1109/TGRS.2014.2318058]
  • [2] Kang X D, Li S T, Fang L Y, et al. Extended random walker-based classification of hyperspectral images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2015, 53(1): 144–153. [DOI:10.1109/TGRS.2014.2319373]
  • [3] Chang C C, Lin C J. LIBSVM:a library for support vector machines[J]. ACM Transactions on Intelligent Systems and Technology, 2011, 2(3): #27. [DOI:10.1145/1961189.1961199]
  • [4] Fauvel M, Benediktsson J A, Chanussot J, et al. Spectral and spatial classification of hyperspectral data using SVMs and morphological profiles[J]. IEEE Transactions on Geoscience and Remote Sensing, 2008, 46(11): 3804–3814. [DOI:10.1109/TGRS.2008.922034]
  • [5] Fang L Y, He N J, Li S T, et al. Extinction profiles fusion for hyperspectral images classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2018, 56(3): 1803–1815. [DOI:10.1109/TGRS.2017.2768479]
  • [6] Ghamisi P, Souza R, Benediktsson J A, et al. Extinction profiles for the classification of remote sensing data[J]. IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(10): 5631–5645. [DOI:10.1109/TGRS.2016.2561842]
  • [7] Fang L Y, Wang C, Li S T, et al. Hyperspectral image classification via multiple-feature-based adaptive sparse representation[J]. IEEE Transactions on Instrumentation and Measurement, 2017, 66(7): 1646–1657. [DOI:10.1109/TIM.2017.2664480]
  • [8] Fang L Y, Li S T, Duan W H, et al. Classification of hyperspectral images by exploiting spectral-spatial information of superpixel via multiple kernels[J]. IEEE Transactions on Geoscience and Remote Sensing, 2015, 53(12): 6663–6674. [DOI:10.1109/TGRS.2015.2445767]
  • [9] Lu T, Li S T, Fang L Y, et al. Set-to-set distance-based spectral-spatial classification of hyperspectral images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(12): 7122–7134. [DOI:10.1109/TGRS.2016.2596260]
  • [10] Li J, Bioucas-Dias J M, Plaza A. Hyperspectral image segmentation using a new Bayesian approach with active learning[J]. IEEE Transactions on Geoscience and Remote Sensing, 2011, 49(10): 3947–3960. [DOI:10.1109/TGRS.2011.2128330]
  • [11] Lou X L, Huang W G, Zhou C B, et al. A method for fast resampling of remote sensing imagery[J]. Journal of Remote Sensing, 2002, 6(2): 96–101. [楼琇林, 黄韦艮, 周长宝, 等. 遥感图像数据重采样的一种快速算法[J]. 遥感学报, 2002, 6(2): 96–101. ] [DOI:10.11834/jrs.20020204]
  • [12] Han H, Wang W Y, Mao B H. Borderline-SMOTE: a new over-sampling method in imbalanced data sets learning[C]//Proceedings of 2005 International Conference on Advances in Intelligent Computing. Hefei, China: Springer, 2005: 878-887.[DOI: 10.1007/11538059_91]
  • [13] Cai Z W, Fan Q F, Feris R S, et al. A unified multi-scale deep convolutional neural network for fast object detection[C]//Proceedings of the 14th European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016: 354-370.[DOI: 10.1007/978-3-319-46493-0_22]
  • [14] Sun B, Kang X D, Li S T, et al. Random-walker-based collaborative learning for hyperspectral image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(1): 212–222. [DOI:10.1109/TGRS.2016.2604290]
  • [15] Qiao T, Ren J C, Wang Z, et al. Effective denoising and classification of hyperspectral images using curvelet transform and singular spectrum analysis[J]. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(1): 119–133. [DOI:10.1109/TGRS.2016.2598065]
  • [16] Shen J B, Yang X S, Li X L, et al. Intrinsic image decomposition using optimization and user scribbles[J]. IEEE Transactions on Cybernetics, 2013, 43(2): 425–436. [DOI:10.1109/TSMCB.2012.2208744]
  • [17] Kang X D, Li S T, Fang L Y, et al. Intrinsic image decomposition for feature extraction of hyperspectral images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2015, 53(4): 2241–2253. [DOI:10.1109/TGRS.2014.2358615]
  • [18] Bezdek J C, Ehrlich R, Full W. FCM:the fuzzy c-means clustering algorithm[J]. Computers & Geosciences, 1984, 10(2-3): 191–203. [DOI:10.1016/0098-3004(84)90020-7]
  • [19] Li S T, Kang X D, Hu J W. Image fusion with guided filtering[J]. IEEE Transactions on Image Processing, 2013, 22(7): 2864–2875. [DOI:10.1109/TIP.2013.2244222]
  • [20] Villa A, Benediktsson J A, Chanussot J, et al. Hyperspectral image classification with independent component discriminant analysis[J]. IEEE Transactions on Geoscience and Remote Sensing, 2011, 49(12): 4865–4876. [DOI:10.1109/TGRS.2011.2153861]
  • [21] Fang L Y, He N J, Li S T, et al. A new spatial-spectral feature extraction method for hyperspectral images using local covariance matrix representation[J]. IEEE Transactions on Geoscience and Remote Sensing, 2018, 56(6): 3534–3546. [DOI:10.1109/TGRS.2018.2801387]
  • [22] Kang X D, Xiang X L, Li S T, et al. PCA-Based edge-preserving features for hyperspectral image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(12): 7140–7151. [DOI:10.1109/TGRS.2017.2743102]