发布时间: 2019-03-16
遥感图像处理
收稿日期: 2018-06-28; 修回日期: 2018-08-31
第一作者简介:
焦姣, 1988年生, 女, 工程师, 博士研究生, 主要研究方向为军事信息处理、遥感图像融合。E-mail:jiaojiao_nk@163.com;
吴玲达, 女, 研究员, 主要研究方向为军事信息处理、虚拟战场环境构建、信息系统建模与仿真、多媒体与虚拟现实技术。E-mail:wld@nudt.edu.cn.
中图法分类号: TP751
文献标识码: A
文章编号: 1006-8961(2019)03-0435-012
摘要
目的 全色图像的空间细节信息增强和多光谱图像的光谱信息保持通常是相互矛盾的,如何能够在这对矛盾中实现最佳融合效果一直以来都是遥感图像融合领域的研究热点与难点。为了有效结合光谱信息与空间细节信息,进一步改善多光谱与全色图像的融合质量,提出一种形态学滤波和改进脉冲耦合神经网络(PCNN)的非下采样剪切波变换(NSST)域多光谱与全色图像融合方法。方法 该方法首先分别对多光谱和全色图像进行非下采样剪切波变换;对二者的低频分量采用形态学滤波和高通调制框架(HPM)进行融合,将全色图像低频子带的细节信息注入到多光谱图像低频子带中得到融合后的低频子带;对二者的高频分量则采用改进脉冲耦合神经网络的方法进行融合,进一步增强融合图像中的空间细节信息;最后通过NSST逆变换得到融合图像。结果 仿真实验表明,本文方法得到的融合图像细节信息清晰且光谱保真度高,视觉效果上优势明显,且各项评价指标与其他方法相比整体上较优。相比于5种方法中3组融合结果各指标平均值中的最优值,清晰度和空间频率分别比NSCT-PCNN方法提高0.5%和1.0%,光谱扭曲度比NSST-PCNN方法降低4.2%,相关系数比NSST-PCNN方法提高1.4%,信息熵仅比NSST-PCNN方法低0.08%。相关系数和光谱扭曲度两项指标的评价结果表明本文方法相比于其他5种方法能够更好地保持光谱信息,清晰度和空间频率两项指标的评价结果则展示了本文方法具有优于其他对比方法的空间细节注入能力,信息熵指标虽不是最优值,但与最优值非常接近。结论 分析视觉效果及各项客观评价指标可以看出,本文方法在提高融合图像空间分辨率的同时,很好地保持了光谱信息。综合来看,本文方法在主观与客观方面均具有优于亮度色调饱和度(IHS)法、主成分分析(PCA)法、基于非负矩阵分解(CNMF)、基于非下采样轮廓波变换和脉冲耦合神经网络(NSCT-PCNN)以及基于非下采样剪切波变换和脉冲耦合神经网络(NSST-PCNN)5种经典及现有流行方法的融合效果。
关键词
多光谱与全色图像融合; 非下采样剪切波变换; 形态学滤波; 高通调制; 脉冲耦合神经网络
Abstract
Objective Various remote sensing sensors presently exist, and multisource remote sensing images, such as multispectral (MS) and panchromatic (PAN) images, can be acquired. MS images, which have rich spectral information and low spatial resolution, cannot meet the remote sensing application demand. Correspondingly, PAN images have more spatial details and higher spatial resolutions. The significance of MS and PAN image fusion is that it improves the spatial resolution of MS images while maintaining original spectral information. It also combines target shape and the structural characteristics of PAN images and the spectral information of MS images to provide great interpretation capability and reliable results, as well as enhances the classification and identification precision of objects. However, the spatial resolution enhancement of PAN images and the spectral information maintenance of MS images are usually contradictory. How to acquire a high fusion performance in the contradictions has always been a popular and difficult point in the research field of remote sensing image fusion and has an extensive prospect in research and application. In this study, a fusion method based on morphological filter and improved pulse-coupled neural network (PCNN) in a non-subsampled shearlet transform (NSST) domain is proposed to improve the fusion quality of MS and PAN images by combining spectral information with spatial details efficiently. Method The proposed method is conducted on MS and PAN images that have been accurately registered. First, the PAN and MS images are decomposed by NSST to obtain low- and high-frequency sub-band coefficients. Second, the low-frequency sub-bands, which are approximate sub-graphs of the original image and inherit the overall characteristics, still have some edges and detailed information. The fusion rule of low-frequency coefficients based on morphological filtering and high-pass modulation (HPM) scheme is proposed. 
The morphological half-gradient operator is used to extract the details of the low-frequency sub-bands of the PAN image owing to its encouraging preliminary fusion results on remote sensing images. The low-resolution PAN sub-band image can be obtained by morphological filtering, and the PAN detail sub-band image is estimated by subtracting the low-resolution PAN sub-band image from the PAN sub-band image that has been histogram-equalized with reference to the MS sub-band image. The spatial details are then injected into the low-frequency sub-band of the MS image through the HPM scheme. For the fusion of high-frequency sub-bands, an improved PCNN is adopted to enhance spatial detail information. Existing PCNN models usually adopt a hard-limiting function as output, and the firing output is 0 or 1, which cannot efficiently reflect the amplitude difference of the synchronous pulse excitation. To this end, a soft-limiting sigmoid function is adopted to calculate the firing output amplitude during the iterations, and the decision matrix for high-frequency coefficient selection is obtained by summing up the firing output amplitudes over the iterative process. Then, the fused low- and high-frequency coefficients are reconstructed with the inverse NSST to obtain the final fusion image. Result A series of simulation experiments is conducted to verify the superiority and validity of the proposed fusion method. Three groups of QuickBird remote sensing images are utilized to test the proposed method. The performance evaluation of the fusion methods includes the subjective visual effect and objective standard evaluation. Visual analysis is the most immediate detection method. Five objective evaluation indicators, namely, image clarity, information entropy, correlation coefficient, spatial frequency, and spectral distortion, are selected to evaluate the fusion results quantitatively and objectively. 
Experimental results show that the proposed method has obvious advantages in the fusion effect. The subjective visual effect of the proposed method is obviously better than those of the other five methods. Details such as image textures and edges are clear, and the spectral information is maintained efficiently. Compared with the other fusion methods, the proposed method also shows clear superiority in the objective evaluation indicators. The average values of the five indicators for three bands are calculated, four of which are the best among the comparison methods. The average values over the three groups of images are also calculated. Compared with the best indicator of the other five methods, the image clarity and spatial frequency of our method are improved by 0.5% and 1.0%, respectively, compared with the NSCT-PCNN method. Our spectral distortion is 4.2% lower than that of the NSST-PCNN method. Our correlation coefficient is 1.4% higher than that of NSST-PCNN, and the information entropy is only 0.08% lower than the best value from NSST-PCNN. The results of the correlation coefficient and spectral distortion demonstrate that the proposed method maintains better spectral information than do the other five methods. Results of the image clarity and spatial frequency show that the proposed method has an excellent capability of detail injection; only the image clarity of the B band in group 2 is slightly lower. The information entropy is very close to the best result. Conclusion A fusion method of MS and PAN images based on morphological operator and improved PCNN in NSST domain is proposed. We present the fusion rules for different frequency bands according to the NSST decomposition of the original MS and PAN images. A low-frequency fusion rule based on morphological half-gradient filtering and the HPM scheme and a high-frequency fusion rule based on the improved PCNN are designed. 
A real satellite dataset is employed for the performance evaluation of the proposed method. The analysis indicates that our method can improve the spatial resolution and maintain the spectral information of fusion results. In general, the proposed method is superior to the traditional methods and some current popular fusion methods from the overall effect of visual aspects and objective indicators.
Key words
multispectral and panchromatic images fusion; non-subsampled shearlet transform; morphological filter; high-pass modulation; pulse coupled neural network
0 引言
遥感图像融合是图像融合领域的一个重要分支,是指利用相关的技术手段将取自同一区域中的多源遥感图像进行整合的过程,而针对多光谱与全色图像的融合是当前遥感图像处理领域的研究热点之一。多光谱图像光谱信息丰富,具有较高的光谱分辨率,但空间分辨率较低,而全色图像可以反映更多的空间细节信息,具有较高的空间分辨率。二者融合的意义在于:能够在提高多光谱图像空间分辨率的同时保持光谱信息;结合全色图像中地物的形状、结构信息以及多光谱图像中的光谱信息,提供更强的解译能力和更加可靠的结果;提高地物的分类精度和目标的检测精度。当前随着多光谱遥感技术的快速发展,多光谱图像在灾情监测评估、地质资源勘探、军事目标检测与识别等领域发挥了重要作用。
传统的多光谱与全色图像的融合方法主要包括亮度色调饱和度(IHS)法[1],主成分分析(PCA)法[2]以及正交变换(GS)法[3]等分量替代方法。这些方法能够有效提高融合图像的空间分辨率,但存在较为严重的频谱失真问题。近年来,基于多分辨率分析的方法被广泛应用于多光谱图像融合领域,频谱失真问题得到了有效解决。多分辨率分析方法符合人眼视觉从粗到细的特性,且可以有效捕获图像的细节信息。其中小波变换是较为典型的多分辨率方法,能够得到更好的多光谱图像融合效果[4],但是由于小波变换的方向局限性使其不能有效地表示2维图像信号,为了寻找高维函数的最优表示方法,人们相继提出了许多新的多尺度几何分析方法。其中,非下采样剪切波变换(NSST)基于剪切波变换改进而来,具有良好的图像稀疏表示性能及低计算成本的特点,更适合于多光谱与全色图像的融合[5]。NSST的出现为多光谱图像融合问题提供了新的思路。
脉冲耦合神经网络(PCNN)是第3代人工神经网络,具有空间邻近与特征相似聚集的特点,不通过训练和学习就可以从复杂背景中提取有效信息。应用于遥感图像融合,与多尺度几何分析相结合,可以更好地发挥其潜力,得到更好的融合效果。但PCNN模型应用于多光谱图像融合依然存在许多问题,还需进一步改进。
形态学算子在许多图像处理应用中已展示了其有效性,如图像编码[6]、医学图像融合[7]等,到目前为止,关于遥感图像融合只有Laporterie等人[8]所做的一些初步的研究工作,以及Bejinariu等人[9]、Restaino等人[10]关于形态学信号分解应用于多光谱图像融合问题的研究工作。形态学方法具有很高的计算效率,能够有效提高融合图像的空间分辨率,可以进一步挖掘其在多光谱图像融合中的潜力。
为了有效融合全色图像的空间细节信息和多光谱图像的光谱信息,得到空间细节信息丰富、光谱保真度高的融合图像,本文提出了一种形态学滤波与改进脉冲耦合神经网络(PCNN)的NSST域多光谱与全色图像融合方法。首先分别对多光谱图像和全色图像进行NSST变换,在NSST域中,高频子带系数利用改进的PCNN进行融合,充分利用PCNN的优势,很好地保留细节信息以及光谱信息,低频子带系数采用形态学滤波和高通调制框架(HPM)进行融合,将全色图像低频子带的细节信息注入到多光谱图像低频子带中得到融合后的低频子带。最后通过NSST逆变换得到融合图像。将本文方法与亮度色调饱和度(IHS)法、主成分分析(PCA)法、基于非负矩阵分解(CNMF)、基于非下采样轮廓波变换和脉冲耦合神经网络(NSCT-PCNN)以及基于非下采样剪切波变换和脉冲耦合神经网络(NSST-PCNN)5种融合方法进行比较,验证本文方法的有效性和优越性。
1 NSST、形态学算子及PCNN
1.1 非下采样剪切波变换(NSST)
Guo等人[11]于2007年通过仿射系统将几何和多尺度结合构造了剪切波,它具有各向异性、结构简单、计算效率高、无方向数和支撑基尺寸限制等优点。当维数 $n=2$ 时,具有合成膨胀的仿射系统定义为
$ \begin{array}{*{20}{c}} {{\psi _{AB}}\left( \psi \right) = \left\{ {{\psi _{j,k,l}}\left( x \right) = } \right.}\\ {\left. {{{\left| {\det \mathit{\boldsymbol{A}}} \right|}^{j/2}}\psi \left( {{\mathit{\boldsymbol{B}}^l}{\mathit{\boldsymbol{A}}^j}x - k} \right),j,l \in {\bf{Z}},k \in {{\bf{Z}}^2}} \right\}} \end{array} $ | (1) |
式中,$\mathit{\boldsymbol{A}}$ 为各向异性膨胀矩阵,$\mathit{\boldsymbol{B}}$ 为剪切矩阵,$j$、$l$、$k$ 分别为尺度、方向和平移参数。
若对任意 $f \in {L^2}\left( {{{\bf{R}}^2}} \right)$,${\psi _{AB}}\left( \psi \right)$ 满足紧框架条件,即
$ \sum\limits_{j,k,l} {{{\left| {\left\langle {f,{\psi _{j,k,l}}} \right\rangle } \right|}^2}} = {\left\| f \right\|^2} $ | (2) |
则称 ${\psi _{AB}}\left( \psi \right)$ 中的元素为合成小波。取
$ \mathit{\boldsymbol{A}} = \left[ {\begin{array}{*{20}{c}} a&0\\ 0&{{a^{1/2}}} \end{array}} \right],\mathit{\boldsymbol{B}} = \left[ {\begin{array}{*{20}{c}} 1&s\\ 0&1 \end{array}} \right] $ |
此时的合成小波即为剪切波,通常取 $a=4$,$s=1$。
如图 1所示,经过频率分解,剪切波的几何性质更为直观。对于不同的尺度,每个剪切波的频域支撑均位于一对关于原点对称的梯形区域内,其大小和方向随尺度与剪切参数而变化。
为了使剪切波变换具备平移不变特性,Easley等人[13]提出了非下采样剪切波变换(NSST)。NSST作为小波变换在多维方向的自然扩展,具有类似于小波变换的严格的数学基础,继承了曲波变换及轮廓波变换的优点,其基函数的楔形支撑空间具有可变性,可通过剪切和膨胀自适应对图像中的几何边缘进行表示,对于2维图像具有最优的逼近特性,是图像中边缘等特征的真正的稀疏表示。
NSST的离散化过程主要分为两个步骤:多尺度分解与方向局部化。多尺度分解通过非下采样金字塔滤波器组(NSP)来实现,每一级需要对上一级采用的滤波器组进行相应的上采样,经过 $k$ 级分解可得到 $k+1$ 个与源图像大小相同的子带图像。
方向局部化则通过剪切滤波器(SF)来实现。对某尺度的子带图像进行剪切滤波即可得到相应数目的方向子带,整个分解过程不含下采样操作,因而NSST具有平移不变性。
1.2 形态学算子
在图像处理中,形态学是借助集合论的语言来描述的,是以形态结构元素为基础进行图像分析的数学工具。主要思想是利用具有一定形态的结构元素从图像中提取对表达和描绘区域形状有意义的图像分量, 从而达到图像分析与识别等目的。目前,形态学已广泛应用于图像去噪、图像增强、边缘提取、图像分割等领域。
1.2.1 膨胀运算与腐蚀运算
令 $\mathit{\boldsymbol{F}}\left( {x,y} \right)$ 为输入的灰度图像,$\mathit{\boldsymbol{B}}\left( {i,j} \right)$ 为结构元素,则利用 $\mathit{\boldsymbol{B}}$ 对 $\mathit{\boldsymbol{F}}$ 进行灰度膨胀运算,定义为
$ \begin{array}{*{20}{c}} {\mathit{\boldsymbol{F}} \oplus \mathit{\boldsymbol{B}}\left( {x,y} \right) = \mathop {\max }\limits_{\left( {i,j} \right)} \left\{ {\left. {\mathit{\boldsymbol{F}}\left( {x - i,y - j} \right) + \mathit{\boldsymbol{B}}\left( {i,j} \right)} \right|} \right.}\\ {\left. {\left( {x - i,y - j} \right) \in {\mathit{\boldsymbol{D}}_F},\left( {i,j} \right) \in {\mathit{\boldsymbol{D}}_B}} \right\}} \end{array} $ | (3) |
式中,${\mathit{\boldsymbol{D}}_F}$ 和 ${\mathit{\boldsymbol{D}}_B}$ 分别为 $\mathit{\boldsymbol{F}}$ 与 $\mathit{\boldsymbol{B}}$ 的定义域。
利用结构元素 $\mathit{\boldsymbol{B}}$ 对 $\mathit{\boldsymbol{F}}$ 进行灰度腐蚀运算,定义为
$ \begin{array}{*{20}{c}} {\mathit{\boldsymbol{F}} \odot \mathit{\boldsymbol{B}}\left( {x,y} \right) = \mathop {\min }\limits_{\left( {i,j} \right)} \left\{ {\left. {\mathit{\boldsymbol{F}}\left( {x + i,y + j} \right) - \mathit{\boldsymbol{B}}\left( {i,j} \right)} \right|} \right.}\\ {\left. {\left( {x + i,y + j} \right) \in {\mathit{\boldsymbol{D}}_F},\left( {i,j} \right) \in {\mathit{\boldsymbol{D}}_B}} \right\}} \end{array} $ | (4) |
计算过程相当于让结构元素 $\mathit{\boldsymbol{B}}$ 在图像 $\mathit{\boldsymbol{F}}$ 上滑动,在其覆盖的邻域内取灰度极小值。
在结构元素
1.2.2 开运算和闭运算
可以在灰度腐蚀和膨胀的基础上定义灰度图像的开运算和闭运算,灰度开运算为先进行灰度腐蚀后再灰度膨胀,灰度闭运算为先进行灰度膨胀后再灰度腐蚀,下面分别给出定义:
使用结构元素 $\mathit{\boldsymbol{B}}$ 对图像 $\mathit{\boldsymbol{F}}$ 进行灰度开运算,定义为
$ \mathit{\boldsymbol{F}} \circ \mathit{\boldsymbol{B}} = \left( {\mathit{\boldsymbol{F}} \odot \mathit{\boldsymbol{B}}} \right) \oplus \mathit{\boldsymbol{B}} $ | (5) |
使用结构元素 $\mathit{\boldsymbol{B}}$ 对图像 $\mathit{\boldsymbol{F}}$ 进行灰度闭运算,定义为
$ \mathit{\boldsymbol{F}} \bullet \mathit{\boldsymbol{B}} = \left( {\mathit{\boldsymbol{F}} \oplus \mathit{\boldsymbol{B}}} \right) \odot \mathit{\boldsymbol{B}} $ | (6) |
开运算常用于去除相对于结构元素尺寸较小的亮细节,闭运算则常用于去除较小的暗细节,二者均能在保持图像整体灰度分布和较大亮、暗区域基本不变的情况下完成滤波。
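式(3)—(6)的四种灰度形态学基本运算可以用如下numpy代码作一示意(仅作演示,取平坦方形结构元素、边界采用边缘复制,并非本文实验代码):

```python
import numpy as np

def dilate(F, r=1):
    """灰度膨胀(式(3)):(2r+1)×(2r+1)平坦方形结构元素,邻域内取极大值。"""
    h, w = F.shape
    P = np.pad(F, r, mode='edge')
    return np.max([P[a:a + h, b:b + w]
                   for a in range(2 * r + 1) for b in range(2 * r + 1)], axis=0)

def erode(F, r=1):
    """灰度腐蚀(式(4)):邻域内取极小值。"""
    h, w = F.shape
    P = np.pad(F, r, mode='edge')
    return np.min([P[a:a + h, b:b + w]
                   for a in range(2 * r + 1) for b in range(2 * r + 1)], axis=0)

def opening(F, r=1):
    """灰度开运算(式(5)):先腐蚀后膨胀,去除较小的亮细节。"""
    return dilate(erode(F, r), r)

def closing(F, r=1):
    """灰度闭运算(式(6)):先膨胀后腐蚀,去除较小的暗细节。"""
    return erode(dilate(F, r), r)
```

例如,对仅含一个孤立亮点的图像,开运算会将该亮点完全去除,而闭运算则将其保留,与上述性质一致。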
1.3 脉冲耦合神经网络(PCNN)
在图像处理中,PCNN受生物视觉皮层模型启发而产生,它是一种反馈型网络,由若干神经元相互链接而构成,每个神经元包括接收域、调制域和脉冲发生器3个部分。由于原始PCNN模型结构比较复杂,用于图像处理时所需调整的参数较多,因而采用如图 2所示的简化的PCNN模型[14],其表达式为
$ \left\{ \begin{array}{l} {F_{ij}}\left( n \right) = {D_{ij}}\\ {L_{ij}}\left( n \right) = {L_{ij}}\left( {n - 1} \right) \times \exp \left( { - {\alpha _L}} \right) + \\ \;\;\;\;\;\;\;\;\;\;\;\;{V_L}\sum\limits_{pq} {{\omega _{ij,pq}}{Y_{pq}}\left( {n - 1} \right)} \\ {U_{ij}}\left( n \right) = {F_{ij}}\left( n \right)\left( {1 + \beta {L_{ij}}\left( n \right)} \right)\\ {\theta _{ij}}\left( n \right) = {\theta _{ij}}\left( {n - 1} \right) \times \exp \left( { - {\alpha _\theta }} \right) + \\ \;\;\;\;\;\;\;\;\;\;\;{V_\theta }{Y_{ij}}\left( {n - 1} \right)\\ {Y_{ij}}\left( n \right) = {\mathop{\rm sgn}} \left( {{U_{ij}}\left( n \right) - {\theta _{ij}}\left( n \right)} \right) \end{array} \right. $ | (7) |
式中,$n$ 为迭代次数,$D_{ij}$ 为输入图像在 $(i,j)$ 处的灰度值,作为反馈输入 $F_{ij}$;$L_{ij}$ 为链接输入,$U_{ij}$ 为内部活动项,${\theta _{ij}}$ 为动态阈值,$Y_{ij}$ 为脉冲输出;$\beta$ 为链接强度,${\alpha _L}$、${\alpha _\theta }$ 为衰减时间常数,$V_L$、$V_\theta$ 为幅度系数,${\omega _{ij,pq}}$ 为神经元间的链接权重。
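式(7)的迭代过程可用如下numpy代码作一示意,其中链接权重矩阵与各参数取值均为演示用的假设值,并非文中实验所用参数:

```python
import numpy as np

def pcnn_fire_counts(D, n_iter=10, alpha_L=1.0, alpha_theta=0.2,
                     V_L=1.0, V_theta=20.0, beta=0.1):
    """按式(7)的简化PCNN模型迭代,返回各神经元的累计点火次数。"""
    h, w = D.shape
    L = np.zeros((h, w))
    theta = np.ones((h, w))      # 初始阈值(假设值)
    Y = np.zeros((h, w))
    fire = np.zeros((h, w))
    # 3×3链接权重(中心为0),对应 omega_{ij,pq}
    K = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    for _ in range(n_iter):
        # 邻域点火反馈 sum(omega * Y),用边界补零实现
        P = np.pad(Y, 1)
        link = sum(K[a, b] * P[a:a + h, b:b + w]
                   for a in range(3) for b in range(3))
        L = L * np.exp(-alpha_L) + V_L * link        # 链接输入
        U = D * (1.0 + beta * L)                     # 内部活动项,F_{ij}=D_{ij}
        theta = theta * np.exp(-alpha_theta) + V_theta * Y  # 动态阈值(用上次Y)
        Y = (U > theta).astype(float)                # 硬限幅输出:U>theta 记为点火
        fire += Y
    return fire
```

灰度值大的区域阈值衰减后更早再次点火,累计点火次数也更多,这正是PCNN用于衡量信息丰富程度的依据。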
2 低频与高频分量的融合规则
对于多光谱图像与全色图像而言,影响二者融合质量的主要因素一方面是空间细节信息的融入度,另一方面是光谱信息的保持度。经过NSST变换,源图像被分解为不同的频率分量,高频分量部分存储了图像中丰富的纹理、边缘等细节信息,是图像突变特性的反映,高频系数的选择对于空间细节信息的保留具有十分重要的作用,而低频分量部分是图像的近似子图,决定着图像的轮廓,低频系数的选择对于提高融合图像的视觉效果,很好地保持原始图像的光谱信息具有非常重要的意义。针对不同频率域需要保留的显著特征,对相应的高、低频分量的融合规则进行设计。
2.1 基于形态学滤波和高通调制的低频分量融合规则
低频子带部分作为原始图像的近似子图,继承了原始图像的整体特性且集中了绝大部分的能量,但仍然有一些边缘和细节信息留存,本文采用一种基于形态学滤波和高通调制的细节信息注入方法对低频分量进行融合。
采用半梯度算子[15]对全色图像的低频子带图像进行细节提取,其表达式为
$ \begin{array}{*{20}{c}} {{{\mathit{\boldsymbol{\bar \psi }}}_{{\rm{HG}},B}} = 0.5\left( {{\mathit{\boldsymbol{\rho }}^ - } - {\mathit{\boldsymbol{\rho }}^ + }} \right) = }\\ {0.5\left( {\mathit{\boldsymbol{F}} - {\varepsilon _B}\left( \mathit{\boldsymbol{F}} \right)} \right) - 0.5\left( {{\delta _B}\left( \mathit{\boldsymbol{F}} \right) - \mathit{\boldsymbol{F}}} \right)} \end{array} $ | (8) |
式中,${\mathit{\boldsymbol{\rho }}^ - } = \mathit{\boldsymbol{F}} - {\varepsilon _B}\left( \mathit{\boldsymbol{F}} \right)$ 为内梯度,${\mathit{\boldsymbol{\rho }}^ + } = {\delta _B}\left( \mathit{\boldsymbol{F}} \right) - \mathit{\boldsymbol{F}}$ 为外梯度,${\varepsilon _B}\left( \cdot \right)$ 和 ${\delta _B}\left( \cdot \right)$ 分别表示以 $\mathit{\boldsymbol{B}}$ 为结构元素的腐蚀和膨胀运算。用原图像减去半梯度即得到形态学滤波结果
$ \begin{array}{*{20}{c}} {{\mathit{\boldsymbol{\psi }}_{{\rm{HG}},B}} = \mathit{\boldsymbol{F}} - {{\mathit{\boldsymbol{\bar \psi }}}_{{\rm{HG}},B}} = }\\ {\mathit{\boldsymbol{F}} - \left[ {0.5\left( {\mathit{\boldsymbol{F}} - {\varepsilon _B}\left( \mathit{\boldsymbol{F}} \right)} \right) - 0.5\left( {{\delta _B}\left( \mathit{\boldsymbol{F}} \right) - \mathit{\boldsymbol{F}}} \right)} \right] = }\\ {0.5\left( {{\varepsilon _B}\left( \mathit{\boldsymbol{F}} \right) + {\delta _B}\left( \mathit{\boldsymbol{F}} \right)} \right)} \end{array} $ | (9) |
即等同于膨胀与腐蚀结果之和的0.5倍。根据膨胀与腐蚀的计算公式,该滤波过程相当于对图像进行低通平滑,可滤除尺寸小于结构元素的细节信息。
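式(8)(9)的半梯度滤波可按如下代码实现(以3×3平坦结构元素为例,属演示性假设):

```python
import numpy as np

def erode3(F):
    """3×3平坦结构元素灰度腐蚀(边缘复制填充)。"""
    h, w = F.shape
    P = np.pad(F, 1, mode='edge')
    return np.min([P[a:a + h, b:b + w] for a in range(3) for b in range(3)], axis=0)

def dilate3(F):
    """3×3平坦结构元素灰度膨胀。"""
    h, w = F.shape
    P = np.pad(F, 1, mode='edge')
    return np.max([P[a:a + h, b:b + w] for a in range(3) for b in range(3)], axis=0)

def half_gradient_filter(F):
    """返回式(9)的滤波结果 0.5*(erode+dilate) 与式(8)的半梯度细节分量。"""
    detail = 0.5 * (F - erode3(F)) - 0.5 * (dilate3(F) - F)   # 式(8)
    low = 0.5 * (erode3(F) + dilate3(F))                      # 式(9)
    return low, detail
```

注意二者满足恒等关系 `F - detail == low`,即滤波输出确为原图像减去半梯度。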
通过形态学滤波得到全色图像的低分辨率低频子带图像 $\mathit{\boldsymbol{P}}_k^{{\rm{low}}}$,将直方图均衡化后的全色低频子带图像 $\mathit{\boldsymbol{P}}_k^{\rm{0}}$ 与之相减即可估计出细节图像,再通过高通调制(HPM)框架将该细节注入多光谱图像的低频子带,即
$ {\widehat {\mathit{\boldsymbol{MS}}}_k} = {\widetilde {\mathit{\boldsymbol{MS}}}_k} + {\widetilde {\mathit{\boldsymbol{MS}}}_k}\frac{{\mathit{\boldsymbol{P}}_k^{\rm{0}} - \mathit{\boldsymbol{P}}_k^{{\rm{low}}}}}{{\mathit{\boldsymbol{P}}_k^{{\rm{low}}}}} $ | (10) |
式中,${\widehat {\mathit{\boldsymbol{MS}}}_k}$ 为融合后的低频子带,${\widetilde {\mathit{\boldsymbol{MS}}}_k}$ 为多光谱图像的低频子带,$k$ 为波段序号。
这一表达式与全色图像的局部对比度密切相关[16]。根据Weber关于对比度的定义,令 ${C_{\rm{W}}} = \left( {\mathit{\boldsymbol{P}}_k^{\rm{0}} - \mathit{\boldsymbol{P}}_k^{{\rm{low}}}} \right)/\mathit{\boldsymbol{P}}_k^{{\rm{low}}}$,则式(10)可改写为
$ {\widehat {\mathit{\boldsymbol{MS}}}_k} = {\widetilde {\mathit{\boldsymbol{MS}}}_k}\left( {1 + {C_{\rm{W}}}} \right) $ | (11) |
基于形态学算子的低频分量融合步骤为:
1) 将全色图像在NSST域的低频子带图像依据多光谱图像在NSST域的低频子带图像进行直方图均衡化处理,得到图像 $\mathit{\boldsymbol{P}}_k^{\rm{0}}$;
2) 设置结构元素 $\mathit{\boldsymbol{B}}$ 的形状与尺寸;
3) 根据式(9)计算半梯度算子形态学滤波结果,得到全色图像的低分辨率低频子带图像 $\mathit{\boldsymbol{P}}_k^{{\rm{low}}}$;
4) 根据低频分量融合式(10)计算低频子带系数的融合结果。
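式(10)的高通调制注入本身只是一步逐像素运算,可以写成如下函数(其中 `eps` 是为避免除零而加入的小量,文中未涉及,属实现上的假设):

```python
import numpy as np

def hpm_fuse_low(ms_low, pan_low_eq, pan_low_lr, eps=1e-6):
    """式(10)的高通调制(HPM)注入。
    ms_low:      多光谱图像的低频子带
    pan_low_eq:  直方图均衡化后的全色低频子带(P^0)
    pan_low_lr:  形态学滤波得到的低分辨率全色低频子带(P^low)
    """
    return ms_low + ms_low * (pan_low_eq - pan_low_lr) / (pan_low_lr + eps)
```

当 `pan_low_eq` 与 `pan_low_lr` 相等(无细节可注入)时,输出即退化为原多光谱低频子带,与式(11)中 $C_{\rm W}=0$ 的情形一致。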
2.2 基于改进PCNN的高频分量融合规则
本文采用简化的PCNN模型对NSST分解得到的高频子带系数进行融合。采用PCNN模型可有效提取图像中的边缘以及纹理等细节信息,点火次数的多少则对应于图像相应位置的信息丰富程度。
现有的基于PCNN模型的融合算法,大都将PCNN模型输出中的点火次数作为判决准则进行融合系数的选取,通常采用硬限幅函数,点火输出非0即1,不能很好地反映同步脉冲激发的幅度差异。因此在简化的PCNN模型基础上,对输出部分进行改进,采用一个软限幅Sigmoid函数[18]对各子带系数迭代过程中的点火输出幅度进行计算,将输出幅度和作为系数选择的判决依据。输出幅度为
$ {T_{ij}}\left( n \right) = \frac{1}{{1 + {{\rm{e}}^{{\theta _{ij}}\left( n \right) - {U_{ij}}\left( n \right)}}}} $ | (12) |
式中,${T_{ij}}\left( n \right)$ 为第 $n$ 次迭代时的点火输出幅度,${U_{ij}}\left( n \right)$ 与 ${\theta _{ij}}\left( n \right)$ 的含义同式(7)。将迭代过程中各次点火输出幅度累加,得
$ {Z_{ij}}\left( n \right) = {Z_{ij}}\left( {n - 1} \right) + {T_{ij}}\left( n \right) $ | (13) |
基于改进PCNN的高频分量融合步骤为:
1) 采用3×3像素的区域滑动窗口分别对全色及多光谱图像进行遍历,计算NSST域中的高频分量对应的输入激励 $D_{ij}$;
2) 设置最大迭代次数 $N_{\max}$,并对模型各参数进行初始化;
3) 根据式(7)进行模型的迭代计算,得到所有的中间结果,并计算出模型点火输出幅度的总和;
4) 当迭代次数达到 $N_{\max}$ 时停止迭代;
5) 根据模型点火输出幅度计算得到决策矩阵 $\mathit{\boldsymbol{H}}$,即
$ H\left( {i,j} \right) = \left\{ {\begin{array}{*{20}{c}} \begin{array}{l} 1\\ 0 \end{array}&\begin{array}{l} Z_{ij}^{\rm{M}}\left( {{N_{\max }}} \right) > Z_{ij}^{\rm{P}}\left( {{N_{\max }}} \right)\\ 其他 \end{array} \end{array}} \right. $ |
基于决策矩阵完成融合结果图像中高频系数的选取,即
$ {S_{\rm{F}}}\left( {i,j} \right) = \left\{ {\begin{array}{*{20}{c}} \begin{array}{l} {S_{\rm{M}}}\left( {i,j} \right)\\ {S_{\rm{P}}}\left( {i,j} \right) \end{array}&\begin{array}{l} H\left( {i,j} \right) = 1\\ H\left( {i,j} \right) = 0 \end{array} \end{array}} \right. $ |
式中,${S_{\rm{F}}}$、${S_{\rm{M}}}$ 和 ${S_{\rm{P}}}$ 分别为融合图像、多光谱图像和全色图像的高频子带系数,$Z_{ij}^{\rm{M}}\left( {{N_{\max }}} \right)$ 和 $Z_{ij}^{\rm{P}}\left( {{N_{\max }}} \right)$ 分别为多光谱与全色图像高频系数对应的点火输出幅度和。
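式(12)的软限幅输出与基于决策矩阵的系数选取可以用两个小函数示意:

```python
import numpy as np

def firing_amplitude(U, theta):
    """式(12):软限幅Sigmoid点火输出幅度,取值在(0,1)之间,
    能够反映 U 超出阈值 theta 的程度,而非简单的0/1。"""
    return 1.0 / (1.0 + np.exp(theta - U))

def select_high_coeffs(S_M, S_P, Z_M, Z_P):
    """根据点火输出幅度和(式(13)累加结果)构造决策矩阵H,
    Z^M > Z^P 处取多光谱高频系数,否则取全色高频系数。"""
    H = Z_M > Z_P
    return np.where(H, S_M, S_P)
```

当内部活动项恰好等于阈值时,式(12)的输出为0.5,体现了软限幅相对硬限幅的平滑过渡。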
2.3 本文方法的融合步骤和流程图
针对全色图像与多光谱图像融合的特点,根据形态学滤波、PCNN以及NSST各自优势进行融合策略的设计,提出了一种结合形态学滤波和改进PCNN的NSST域多光谱与全色图像融合方法。方法流程如图 3所示,具体的步骤如下:
1) 对全色图像与多光谱图像进行精确配准,本文算法中的待融合图像均已完成精确配准;
2) 利用NSST变换对全色图像和多光谱图像进行多尺度多方向分解,分别得到全色图像与多光谱图像的低频与高频子带系数;
3) 针对全色图像与多光谱图像经NSST分解得到的低频部分,结合形态学滤波方法和高通调制框架将全色图像低频子带的细节信息注入到多光谱图像低频子带中,得到融合后的低频子带系数
4) 利用改进的PCNN方法对全色图像与多光谱图像经NSST分解得到的高频部分进行融合,得到融合后的高频子带系数
5) 对融合后的低频系数与高频系数进行NSST逆变换,重构得到最终的融合图像。
3 实验结果及分析
3.1 实验数据及参数设置
选取3组图像数据进行实验分析,所用数据集均已完成配准。如图 4所示,3组图像来源于QuickBird卫星,于2002年11月21日拍摄于印度孙德尔本斯国家森林公园,所得全色图像与多光谱图像的大小分别为1 024×1 024像素和256×256像素,分辨率分别为0.61 m、2.44 m。由于缺少用于融合性能评估的参考图像,通过对原始多光谱及全色图像进行降采样因子为4的低通滤波降采样处理得到用于融合的图像数据,实验中,图像大小变为256×256像素和64×64像素,将原始多光谱图像作为参考图像。实验参数选取如下:NSST中,多尺度分解滤波器选取“maxflat”,尺度分解层数为3,方向数分别为{4,8,8};改进PCNN中,
3.2 融合方法性能评价
关于多光谱与全色图像融合方法的性能评价,分为主观和客观两类。其中视觉分析是最为直接的检测手段,同时为了对图像的融合效果进行客观的定量评价,本文选取清晰度、信息熵、相关系数、空间频率和光谱扭曲度[22] 5种客观评价指标。
1) 清晰度,即平均梯度,反映了融合图像中的纹理和细节特征的变化程度,定义为
$ \begin{array}{*{20}{c}} {AG = \frac{1}{{M \times N}}\sum\limits_{i = 1}^M {\sum\limits_{j = 1}^N {} } }\\ {\sqrt {\begin{array}{*{20}{c}} {\frac{1}{2}\left( {{{\left( {F\left( {i,j} \right) - F\left( {i + 1,j} \right)} \right)}^2} + } \right.}\\ {\left. {{{\left( {F\left( {i,j} \right) - F\left( {i,j + 1} \right)} \right)}^2}} \right)} \end{array}} } \end{array} $ | (14) |
2) 信息熵,图像的信息熵用来衡量图像包含信息的丰富程度,定义为
$ EN = - \sum\limits_{i = 0}^{L - 1} {{p_F}\left( i \right){{\log }_2}\left( {{p_F}\left( i \right)} \right)} $ | (15) |
式中,${p_F}\left( i \right)$ 为融合图像中灰度值为 $i$ 的像素出现的概率,$L$ 为图像的灰度级数。
3) 相关系数,反映了融合图像与多光谱图像之间的相关程度。定义为
$ CC = \frac{{\begin{array}{*{20}{c}} {\sum\limits_{m = 1}^M {\sum\limits_{n = 1}^N {\left[ {\left( {F\left( {m,n} \right) - \bar F} \right) \times } \right.} } }\\ {\left. {\left( {MS\left( {m,n} \right) - \overline {MS} } \right)} \right]} \end{array}}}{{\sqrt {\begin{array}{l} \sum\limits_{m = 1}^M {\sum\limits_{n = 1}^N {{{\left( {F\left( {m,n} \right) - \bar F} \right)}^2} \times } } \\ \sum\limits_{m = 1}^M {\sum\limits_{n = 1}^N {{{\left( {MS\left( {m,n} \right) - \overline {MS} } \right)}^2}} } \end{array} }}} $ | (16) |
式中,$\bar F$ 和 $\overline {MS}$ 分别为融合图像与多光谱图像的灰度均值,$M$、$N$ 分别为图像的行数与列数。
4) 空间频率,用来度量图像空间域细节信息的丰富程度,定义为
$ {F_{\rm{S}}} = \sqrt {{F_{\rm{R}}}^2 + {F_{\rm{C}}}^2} $ | (17) |
式中,${F_{\rm{R}}}$ 为行频率,${F_{\rm{C}}}$ 为列频率,分别定义为
$ {F_{\rm{R}}} = \sqrt {\frac{1}{{M \times N}}\sum\limits_{i = 1}^M {\sum\limits_{j = 2}^N {{{\left[ {F\left( {i,j} \right) - F\left( {i,j - 1} \right)} \right]}^2}} } } $ |
$ {F_{\rm{C}}} = \sqrt {\frac{1}{{M \times N}}\sum\limits_{i = 2}^M {\sum\limits_{j = 1}^N {{{\left[ {F\left( {i,j} \right) - F\left( {i - 1,j} \right)} \right]}^2}} } } $ |
5) 光谱扭曲度,反映了融合图像与多光谱图像之间的光谱扭曲程度。定义为
$ SD = \frac{1}{{M \times N}}\sum\limits_{i = 1}^M {\sum\limits_{j = 1}^N {\left| {F\left( {i,j} \right) - MS\left( {i,j} \right)} \right|} } $ | (18) |
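上述5种指标可按下列numpy代码计算(式中边界差分的处理范围未明确,此处按可计算的范围实现,属实现上的假设):

```python
import numpy as np

def avg_gradient(F):
    """清晰度/平均梯度,式(14),在可作前向差分的(M-1)×(N-1)范围内求均值。"""
    dx = F[1:, :-1] - F[:-1, :-1]
    dy = F[:-1, 1:] - F[:-1, :-1]
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))

def entropy(F, levels=256):
    """信息熵,式(15),F取整数灰度级。"""
    hist = np.bincount(F.astype(np.int64).ravel(), minlength=levels)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def corr_coef(F, MS):
    """相关系数,式(16)。"""
    a = F - F.mean()
    b = MS - MS.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

def spatial_frequency(F):
    """空间频率,式(17),行、列频率均按式中的1/(M×N)归一化。"""
    mn = F.size
    rf2 = np.sum((F[:, 1:] - F[:, :-1]) ** 2) / mn
    cf2 = np.sum((F[1:, :] - F[:-1, :]) ** 2) / mn
    return float(np.sqrt(rf2 + cf2))

def spectral_distortion(F, MS):
    """光谱扭曲度,式(18):与参考多光谱图像的平均绝对差。"""
    return float(np.mean(np.abs(F - MS)))
```

例如,对常数图像,清晰度与空间频率均为0;融合图像与参考图像完全相同时,光谱扭曲度为0、相关系数为1。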
3.3 实验结果
图 5为3组多光谱与全色图像利用上述5种融合方法及本文方法获得的融合结果。从主观视觉效果来看,基于IHS方法的融合结果具有严重的光谱失真和空间信息缺失,如图 5(a)所示。图 5(b)(c)分别为基于PCA变换和CNMF方法的融合结果,这两种方法对于融合图像的细节信息有所加强,但依然存在较为严重的光谱扭曲。图 5(d)(e)分别为基于NSCT-PCNN方法和NSST-PCNN方法得到的融合图像,这两种方法较好地保持了多光谱图像的光谱信息,图像纹理、边缘等细节信息也较为清晰。图 5(f)为本文方法得到的融合结果,视觉效果上明显优于其他5种方法,细节信息清晰且光谱保真度高。
表 1、表 2和表 3分别给出了6种图像融合方法对于3组图像数据的定量评价结果。IHS变换方法光谱扭曲严重,空间分辨率也很差。PCA和CNMF方法空间分辨率和清晰度较优,具有优于IHS方法捕获图像细节的能力,但光谱扭曲度依然较高。NSCT-PCNN方法以空间频率等空间细节指标见长,NSST-PCNN方法在光谱保持方面略优,二者整体表现接近;本文方法在清晰度、相关系数、空间频率和光谱扭曲度4项指标上整体最优,信息熵与最优值非常接近,表明本文方法在注入空间细节的同时能够更好地保持光谱信息。
表 1
第1组图像的6种融合方法定量评价结果
Table 1
Quantitative comparison results of six methods for the group 1 images
融合方法 | 波段 | 清晰度 | 信息熵 | 相关系数 | 空间频率 | 光谱扭曲度 |
IHS | R | 4.224 8 | 6.674 0 | 0.886 8 | 8.709 9 | 57.444 1 |
G | 4.073 3 | 6.170 9 | 0.882 0 | 8.401 5 | 57.218 7 | |
B | 4.085 9 | 6.114 2 | 0.889 2 | 8.468 8 | 57.174 8 | |
平均 | 4.128 0 | 6.319 7 | 0.886 0 | 8.526 7 | 57.279 2 | |
PCA | R | 6.748 9 | 7.053 0 | 0.866 3 | 14.514 2 | 19.536 2 |
G | 5.685 4 | 6.873 8 | 0.838 3 | 12.186 1 | 17.974 2 | |
B | 5.652 6 | 6.838 2 | 0.846 7 | 12.172 9 | 17.670 5 | |
平均 | 6.029 0 | 6.921 7 | 0.850 4 | 12.957 7 | 18.393 6 | |
CNMF | R | 6.560 7 | 7.411 6 | 0.897 0 | 13.883 1 | 38.554 7 |
G | 6.281 2 | 7.274 3 | 0.844 0 | 13.156 5 | 35.504 9 | |
B | 6.244 9 | 7.265 2 | 0.846 0 | 13.253 9 | 32.892 9 | |
平均 | 6.362 3 | 7.317 0 | 0.862 3 | 13.431 1 | 35.650 8 | |
NSCT-PCNN | R | 8.548 8 | 7.438 6 | 0.949 5 | 17.983 0 | 11.011 1 |
G | 8.551 4 | 7.257 8 | 0.917 1 | 17.830 8 | 11.997 9 | |
B | 8.543 2 | 7.248 1 | 0.914 4 | 17.879 1 | 12.425 1 | |
平均 | 8.547 8 | 7.314 8 | 0.927 0 | 17.897 6 | 11.811 4 | |
NSST-PCNN | R | 8.518 7 | 7.437 4 | 0.949 1 | 17.892 4 | 10.994 6 |
G | 8.534 2 | 7.257 4 | 0.917 6 | 17.755 1 | 11.927 7 | |
B | 8.522 0 | 7.247 1 | 0.914 8 | 17.798 9 | 12.357 7 | |
平均 | 8.525 0 | 7.314 0 | 0.927 2 | 17.815 5 | 11.760 0 | |
本文方法 | R | 8.601 5 | 7.469 7 | 0.962 0 | 18.380 4 | 9.573 6 |
G | 8.579 4 | 7.273 4 | 0.930 3 | 18.146 8 | 10.940 8 | |
B | 8.573 5 | 7.247 8 | 0.929 2 | 18.240 1 | 11.353 0 | |
平均 | 8.584 8 | 7.330 3 | 0.940 5 | 18.255 8 | 10.622 5 | |
注:加粗字体表示各评价指标中的最优值。 |
表 2
第2组图像的6种融合方法定量评价结果
Table 2
Quantitative comparison results of six methods for the group 2 images
融合方法 | 波段 | 清晰度 | 信息熵 | 相关系数 | 空间频率 | 光谱扭曲度 |
IHS | R | 9.270 4 | 6.837 2 | 0.777 6 | 16.563 1 | 58.889 0 |
G | 9.223 5 | 6.669 7 | 0.703 7 | 16.440 1 | 57.874 8 | |
B | 9.124 3 | 6.769 9 | 0.726 5 | 16.260 3 | 58.674 5 | |
平均 | 9.206 1 | 6.758 9 | 0.735 9 | 16.421 2 | 58.479 4 | |
PCA | R | 16.662 2 | 7.472 3 | 0.744 9 | 30.748 0 | 35.176 0 |
G | 14.568 0 | 7.311 6 | 0.750 7 | 26.807 7 | 31.848 1 | |
B | 15.510 8 | 7.420 1 | 0.720 7 | 28.557 7 | 33.257 8 | |
平均 | 15.580 3 | 7.401 3 | 0.738 8 | 28.704 5 | 33.427 3 | |
CNMF | R | 17.070 2 | 7.664 9 | 0.796 9 | 30.588 5 | 31.557 5 |
G | 14.418 2 | 7.463 1 | 0.815 3 | 25.997 0 | 27.965 6 | |
B | 13.794 5 | 7.532 1 | 0.808 5 | 24.634 0 | 28.604 9 | |
平均 | 15.094 3 | 7.553 4 | 0.806 9 | 27.073 2 | 29.376 0 | |
NSCT-PCNN | R | 20.287 7 | 7.757 4 | 0.859 6 | 37.273 0 | 26.302 9 |
G | 20.270 0 | 7.649 4 | 0.847 2 | 37.215 3 | 24.757 4 | |
B | 19.970 4 | 7.631 7 | 0.837 3 | 36.790 7 | 25.809 3 | |
平均 | 20.176 0 | 7.679 5 | 0.848 0 | 37.093 0 | 25.623 2 | |
NSST-PCNN | R | 20.249 1 | 7.759 4 | 0.861 7 | 37.133 9 | 26.140 3 |
G | 20.224 5 | 7.648 7 | 0.849 2 | 37.071 4 | 24.616 2 | |
B | 19.928 8 | 7.639 7 | 0.839 9 | 36.646 4 | 25.623 0 | |
平均 | 20.134 1 | 7.682 6 | 0.850 3 | 36.950 6 | 25.459 8 | |
本文方法 | R | 20.382 4 | 7.718 8 | 0.874 4 | 37.567 0 | 24.695 8 |
G | 20.310 6 | 7.628 8 | 0.862 5 | 37.414 4 | 23.467 2 | |
B | 19.962 9 | 7.590 1 | 0.851 2 | 36.966 3 | 24.795 7 | |
平均 | 20.218 6 | 7.645 9 | 0.862 7 | 37.315 9 | 24.319 6 | |
注:加粗字体表示各评价指标中的最优值。 |
表 3
第3组图像的6种融合方法定量评价结果
Table 3
Quantitative comparison results of six methods for the group 3 images
融合方法 | 波段 | 清晰度 | 信息熵 | 相关系数 | 空间频率 | 光谱扭曲度 |
IHS | R | 7.675 1 | 6.615 3 | 0.563 3 | 14.510 9 | 45.592 6 |
G | 7.776 8 | 6.843 5 | 0.544 4 | 14.629 0 | 44.450 7 | |
B | 7.590 4 | 6.354 2 | 0.436 8 | 14.368 4 | 45.110 1 | |
平均 | 7.680 8 | 6.604 3 | 0.514 8 | 14.502 8 | 45.051 1 | |
PCA | R | 12.769 8 | 7.296 8 | 0.494 1 | 23.326 3 | 44.917 9 |
G | 12.352 6 | 7.415 3 | 0.558 7 | 22.499 0 | 40.763 9 | |
B | 11.284 6 | 7.115 2 | 0.471 5 | 20.601 9 | 39.690 7 | |
平均 | 12.135 7 | 7.275 8 | 0.508 1 | 22.142 4 | 41.790 8 | |
CNMF | R | 14.050 8 | 7.573 1 | 0.572 6 | 24.820 1 | 50.135 0 |
G | 12.643 6 | 7.628 8 | 0.653 2 | 22.473 9 | 49.806 3 | |
B | 11.732 2 | 7.396 8 | 0.574 3 | 20.890 0 | 46.742 4 | |
平均 | 12.808 9 | 7.532 9 | 0.600 0 | 22.728 0 | 48.894 6 | |
NSCT-PCNN | R | 15.703 3 | 7.487 2 | 0.795 7 | 28.796 0 | 27.334 4 |
G | 15.763 6 | 7.501 5 | 0.828 6 | 28.933 3 | 23.580 4 | |
B | 15.674 5 | 7.412 7 | 0.773 9 | 28.753 6 | 25.783 9 | |
平均 | 15.713 8 | 7.467 1 | 0.799 4 | 28.827 6 | 25.566 2 | |
NSST-PCNN | R | 15.720 0 | 7.496 3 | 0.804 5 | 28.765 9 | 26.713 9 |
G | 15.748 4 | 7.504 6 | 0.834 2 | 28.868 8 | 23.148 1 | |
B | 15.667 9 | 7.415 5 | 0.782 2 | 28.695 9 | 25.258 5 | |
平均 | 15.712 1 | 7.472 1 | 0.807 0 | 28.776 9 | 25.040 2 | |
本文方法 | R | 15.864 1 | 7.490 6 | 0.815 0 | 29.073 0 | 26.245 8 |
G | 15.868 1 | 7.508 9 | 0.844 4 | 29.128 6 | 22.668 7 | |
B | 15.776 1 | 7.423 0 | 0.790 2 | 28.939 2 | 25.203 2 | |
平均 | 15.836 1 | 7.474 2 | 0.816 5 | 29.046 9 | 24.705 9 | |
注:加粗字体表示各评价指标中的最优值。 |
4 结论
针对多光谱与全色图像融合方法中存在的光谱信息损失以及纹理细节不够丰富的问题,提出了一种形态学滤波和改进脉冲耦合神经网络(PCNN)的NSST域多光谱与全色图像融合方法。该方法在对原始图像进行NSST变换的基础上,研究了基于形态学滤波和高通调制框架(HPM)的低频子带融合规则,以及基于改进脉冲耦合神经网络(PCNN)的高频子带融合规则,融合结果达到了增强图像局部空间细节的目的,同时很好地保留了多光谱图像的光谱信息。仿真实验结果表明本文方法对于融合全色图像空间细节信息和多光谱图像光谱信息的有效性和优越性。
下一步的工作需要对多尺度分解过程中保持边缘的滤波进行研究,克服该方法由于空间不连续性所带来的人造纹理和灰度不均衡现象,进一步改善多光谱图像融合质量。
参考文献
-
[1] Carper W J, Lillesand T M, Kiefer P W. The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multispectral image data[J]. Photogrammetric Engineering and Remote Sensing, 1990, 56(4): 459–467.
-
[2] Pohl C, Van Genderen J L. Review article multisensor image fusion in remote sensing: concepts, methods and applications[J]. International Journal of Remote Sensing, 1998, 19(5): 823–854. [DOI:10.1080/014311698215748]
-
[3] Laben C A, Brower B V. Process for enhancing the spatial resolution of multispectral imagery using pan-sharpening: US, 6011875[P]. 2000-01-04.
-
[4] Zhou J, Civco D L, Silander J A. A wavelet transform method to merge Landsat TM and SPOT panchromatic data[J]. International Journal of Remote Sensing, 1998, 19(4): 743–757. [DOI:10.1080/014311698215973]
-
[5] Wu Y Q, Wang Z L. Multispectral and panchromatic image fusion using chaotic Bee Colony optimization in NSST domain[J]. Journal of Remote Sensing, 2017, 21(4): 549–557. [吴一全, 王志来. 混沌蜂群优化的NSST域多光谱与全色图像融合[J]. 遥感学报, 2017, 21(4): 549–557. ] [DOI:10.11834/jrs.20176273]
-
[6] Overturf L A, Comer M L, Delp E J. Color image coding using morphological pyramid decomposition[J]. IEEE Transactions on Image Processing, 1995, 4(2): 177–185. [DOI:10.1109/83.342191]
-
[7] Mukhopadhyay S, Chanda B. Fusion of 2D grayscale images using multiscale morphology[J]. Pattern Recognition, 2001, 34(10): 1939–1949. [DOI:10.1016/S0031-3203(00)00123-0]
-
[8] Laporterie F, Amram O, Flouzat G, et al. Data fusion thanks to an improved morphological pyramid approach: comparison loop on simulated images and application to SPOT 4 data[C]//IEEE 2000 International Geoscience and Remote Sensing Symposium. Taking the Pulse of the Planet: The Role of Remote Sensing in Managing the Environment. Honolulu, HI, USA: IEEE, 2000: 2117-2119.[DOI: 10.1109/IGARSS.2000.858314]
-
[9] Bejinariu S I, Rotaru F, Niţă C D, et al. Morphological wavelets for panchromatic and multispectral image fusion[C]//Proceedings of the 5th International Workshop Soft Computing Applications. Berlin, Heidelberg: Springer, 2013: 573-583.[DOI: 10.1007/978-3-642-33941-7_50]
-
[10] Restaino R, Vivone G, Mura M D, et al. Fusion of multispectral and panchromatic images based on morphological operators[J]. IEEE Transactions on Image Processing, 2016, 25(6): 2882–2895. [DOI:10.1109/TIP.2016.2556944]
-
[11] Guo K H, Labate D. Optimally sparse multidimensional representation using shearlets[J]. SIAM Journal on Mathematical Analysis, 2007, 39(1): 298–318. [DOI:10.1137/060649781]
-
[12] Kutyniok G, Labate D. Shearlets: Multiscale Analysis for Multivariate Data[M]. Basel: Birkhäuser, 2012.
-
[13] Easley G, Labate D, Lim W Q. Sparse directional image representations using the discrete shearlet transform[J]. Applied and Computational Harmonic Analysis, 2008, 25(1): 25–46. [DOI:10.1016/j.acha.2007.09.003]
-
[14] Blasch E P. Biological information fusion using a PCNN and belief filtering[C]//Proceedings of the International Joint Conference on Neural Networks. Washington, DC, USA: IEEE, 1999, 4: 2792-2795.[DOI: 10.1109/IJCNN.1999.833523]
-
[15] Soille P. Morphological Image Analysis: Principles and Applications[M]. Berlin, Germany: Springer, 2003: 84-87.
-
[16] Vivone G, Restaino R, Mura M D, et al. Contrast and error-based fusion schemes for multispectral image pansharpening[J]. IEEE Geoscience and Remote Sensing Letters, 2014, 11(5): 930–934. [DOI:10.1109/LGRS.2013.2281996]
-
[17] Peli E. Contrast in complex images[J]. Journal of the Optical Society of America A, 1990, 7(10): 2032–2040. [DOI:10.1364/JOSAA.7.002032]
-
[18] Liao Y, Huang W L, Shang L, et al. Image fusion based on shearlet and improved PCNN[J]. Computer Engineering and Applications, 2014, 50(2): 142–146. [廖勇, 黄文龙, 尚琳, 等. Shearlet与改进PCNN相结合的图像融合[J]. 计算机工程与应用, 2014, 50(2): 142–146. ] [DOI:10.3778/j.issn.1002-8331.1207-0258]
-
[19] Wang Z N, Yu X C, Zhang L B. A remote sensing image fusion algorithm based on non-negative matrix factorization[J]. Journal of Beijing Normal University:Natural Science, 2008, 44(4): 387–390. [王仲妮, 余先川, 张立保. 基于受限的非负矩阵分解的多光谱和全色遥感影像融合[J]. 北京师范大学学报:自然科学版, 2008, 44(4): 387–390. ] [DOI:10.3321/j.issn:0476-0301.2008.04.012]
-
[20] Qu X B, Yan J W, Xiao H Z, et al. Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled Contourlet transform domain[J]. Acta Automatica Sinica, 2008, 34(12): 1508–1514. [屈小波, 闫敬文, 肖弘智, 等. 非降采样Contourlet域内空间频率激励的PCNN图像融合算法[J]. 自动化学报, 2008, 34(12): 1508–1514. ] [DOI:10.3724/SP.J.1004.2008.01508]
-
[21] Jiang P, Zhang Q, Li J, et al. Fusion algorithm for infrared and visible image based on NSST and adaptive PCNN[J]. Laser & Infrared, 2014, 44(1): 108–113. [江平, 张强, 李静, 等. 基于NSST和自适应PCNN的图像融合算法[J]. 激光与红外, 2014, 44(1): 108–113. ] [DOI:10.3969/j.issn.1001-5078.2014.01.024]
-
[22] Stathaki T. Image Fusion:Algorithms and Applications[M]. Amsterdam: Academic Press, 2008: 367-392.