Remote sensing pan-sharpening based on channel fusion and progressive enhancement
2023, Vol. 28, No. 1, pp. 305-316
Received: 2022-06-02; revised: 2022-10-10; accepted: 2022-10-17; published in print: 2023-01-16
DOI: 10.11834/jig.220538
Objective
Remote sensing image fusion aims to fuse a low-spatial-resolution multispectral image with the corresponding high-spatial-resolution panchromatic image into a high-spatial-resolution multispectral image. To address the image quality degradation and discontinuous spatial details caused by up-sampling the multispectral image, this paper proposes a progressive enhancement strategy; to better fuse the complementary information of the two images, a channel-wise fusion strategy is also proposed.
Method
An end-to-end network is constructed with two stages: progressive-scale detail enhancement and channel fusion. Considering the detail blurring caused by up-sampling the low-spatial-resolution multispectral image, the first stage takes panchromatic images at different scales as extra information and progressively enhances the multispectral image through two detail enhancement modules. In the second stage, the panchromatic image is fused with each channel of the multispectral image through structure-preserving modules, which better exploits the complementary information of the two images and yields the high-spatial-resolution multispectral image.
Result
Experiments on the GaoFen-2 and QuickBird datasets compare the proposed algorithm with eight well-performing methods. The algorithm achieves the best values on the reference-based metrics peak signal-to-noise ratio (PSNR), structural similarity (SSIM), correlation coefficient (CC), and erreur relative globale adimensionnelle de synthèse (ERGAS). On GaoFen-2, PSNR, CC, and ERGAS improve by 0.872 dB, 0.01, and 0.109 on average; on QuickBird, they improve by 0.755 dB, 0.011, and 0.099.
Conclusion
The proposed algorithm performs well in both spatial resolution and spectral preservation, producing higher-quality fusion results.
Objective
Remote sensing (RS) image fusion aims to produce high-resolution multispectral (HRMS) images by integrating low-resolution multispectral (LRMS) images with the corresponding high-spatial-resolution panchromatic (PAN) images. Pan-sharpening is widely used as a pre-processing tool for many vision applications, such as object detection, environmental surveillance, landscape monitoring, and scene segmentation. The key issue in pan-sharpening is how to gather the distinct, complementary information carried by the multi-source images. Pan-sharpening methods can be divided into four categories: 1) component substitution (CS), 2) multi-resolution analysis (MRA), 3) model-based, and 4) deep-learning-based. CS-based methods are easy to use but suffer from severe spectral distortion caused by the mismatch between PAN and LRMS images. MRA methods extract spatial features from the PAN image through multi-scale transformation and inject these high-resolution features into the up-sampled LRMS image; although such methods preserve spatial details well, the injected features tend to corrupt the spectral information. Model-based methods require complicated and time-consuming optimization. Deep-learning-based methods perform well but still face two challenges: 1) image quality degradation caused by up-sampling the multispectral image, and 2) insufficient integration that ignores the variability across channels. To alleviate these problems, we adopt a channel fusion strategy to better mine the information of the two modalities and propose a progressive detail enhancement module to resolve the quality degradation caused by up-sampling.
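As background for the taxonomy above, the idea behind CS-based methods can be sketched with a minimal IHS-style substitution: the intensity component of the up-sampled MS image is replaced by the histogram-matched PAN image. The NumPy sketch below is a generic illustration on synthetic arrays, not the algorithm of this paper; the function name and data are ours.

```python
import numpy as np

def ihs_pansharpen(ms_up, pan):
    """Minimal IHS-style component substitution (illustrative only).

    ms_up: up-sampled multispectral image, shape (H, W, C), values in [0, 1]
    pan:   panchromatic image, shape (H, W)
    """
    # Intensity component: per-pixel mean over the spectral channels.
    intensity = ms_up.mean(axis=2)
    # Histogram-match PAN to the intensity component (mean/std matching).
    pan_matched = (pan - pan.mean()) / (pan.std() + 1e-12)
    pan_matched = pan_matched * intensity.std() + intensity.mean()
    # Substitution: inject the PAN-minus-intensity detail into every channel.
    detail = pan_matched - intensity
    return ms_up + detail[..., None]

rng = np.random.default_rng(0)
ms = rng.random((8, 8, 4))   # toy 4-band MS image
pan = rng.random((8, 8))     # toy PAN image carrying finer spatial detail
sharpened = ihs_pansharpen(ms, pan)
print(sharpened.shape)       # (8, 8, 4)
```

Because the same detail is added to every band, inter-band differences are untouched, which is also why local PAN/MS dissimilarity shows up as the spectral distortion noted above.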
Method
Most deep-learning-based methods directly up-sample the multispectral image to the size of the panchromatic image, which degrades image quality and loses some spatial details. To obtain gradually enhanced results, we perform progressive scale detail enhancement using the information of multi-scale panchromatic images. A channel fusion strategy then fuses the enhanced multispectral image with the corresponding panchromatic image, so that the effective information of the two modalities can be captured to predict the HRMS image. The channel fusion process consists of three steps: 1) decomposition, 2) fusion, and 3) reconstruction. In the decomposition step, each channel of the enhanced multispectral image is concatenated with the panchromatic image, and a shallow feature is obtained by two 3×3 convolutional layers. The fusion step applies a new fusion strategy built on eight structure-preserving modules over the channels. Each structure-preserving module has four branches, equal to the number of channels in the multispectral image; each branch extracts features with convolutional layers, and residual connections are added to each branch for efficient information transfer. In the reconstruction step, the obtained per-channel features are re-integrated by remapping to reconstruct the high-resolution multispectral image.
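The three-step channel fusion pipeline described above (decomposition, fusion, reconstruction) can be sketched structurally as follows. This is a toy NumPy illustration of the data flow only, not the paper's network: a box blur stands in for the learned 3×3 convolutions, and all names are ours.

```python
import numpy as np

def box_filter(x, k=3):
    """Toy stand-in for a learned 3x3 convolution: a simple box blur."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)

def channel_fusion(ms_up, pan):
    """Structural sketch: decomposition -> fusion -> reconstruction.

    ms_up: enhanced/up-sampled MS image, shape (H, W, C)
    pan:   PAN image, shape (H, W)
    """
    fused_channels = []
    for c in range(ms_up.shape[2]):
        # 1) Decomposition: pair each MS channel with the PAN image
        #    and extract a shallow feature.
        stacked = np.stack([ms_up[..., c], pan], axis=0)
        shallow = box_filter(stacked.mean(axis=0))
        # 2) Fusion: one branch per channel with a residual connection.
        fused = box_filter(shallow) + shallow
        fused_channels.append(fused)
    # 3) Reconstruction: remap the per-channel features back to an image.
    return np.stack(fused_channels, axis=2)

rng = np.random.default_rng(1)
hrms = channel_fusion(rng.random((16, 16, 4)), rng.random((16, 16)))
print(hrms.shape)  # (16, 16, 4)
```

The per-channel loop is the point of the design: each spectral band gets its own fusion branch with the PAN image, rather than one shared branch over the whole stack, which is how the method accounts for variability across channels.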
Result
Our model is compared with eight state-of-the-art pan-sharpening methods, including both traditional approaches and deep-learning methods, on two datasets: GaoFen-2 and QuickBird. The quantitative evaluation metrics are peak signal-to-noise ratio (PSNR), structural similarity (SSIM), correlation coefficient (CC), spectral angle mapper (SAM), erreur relative globale adimensionnelle de synthèse (ERGAS), quality with no reference (QNR), $$D_\lambda$$, and $$D_S$$. Compared with the other methods, PSNR, SSIM, CC, and ERGAS are improved by 0.872 dB, 0.005, 0.01, and 0.109 on average on the GaoFen-2 dataset, and by 0.755 dB, 0.011, 0.004, and 0.099 on the QuickBird dataset. Furthermore, a series of ablation experiments is conducted to clarify the effectiveness of the different modules of the fusion algorithm.
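For reference, the two headline metrics above follow standard definitions; the NumPy sketch below illustrates them on synthetic data and is not the paper's evaluation code. The resolution ratio of 1/4 in ERGAS assumes the usual 4:1 PAN/MS pixel-size ratio of GaoFen-2 and QuickBird.

```python
import numpy as np

def psnr(ref, fused, max_val=1.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    mse = np.mean((ref - fused) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ergas(ref, fused, ratio=0.25):
    """ERGAS (lower is better). `ratio` is the PAN/MS pixel-size
    ratio h/l, i.e. 1/4 for GaoFen-2 and QuickBird imagery."""
    bands = ref.shape[2]
    acc = 0.0
    for k in range(bands):
        rmse_k = np.sqrt(np.mean((ref[..., k] - fused[..., k]) ** 2))
        acc += (rmse_k / ref[..., k].mean()) ** 2
    return 100.0 * ratio * np.sqrt(acc / bands)

rng = np.random.default_rng(0)
ref = rng.random((32, 32, 4)) * 0.5 + 0.25   # synthetic reference HRMS
noisy = ref + rng.normal(0, 0.01, ref.shape)  # synthetic fused result
print(round(psnr(ref, noisy), 1))  # roughly 40 dB for sigma = 0.01
```

A gain of 0.872 dB PSNR, as reported on GaoFen-2, thus corresponds to roughly an 18% reduction in mean squared error against the reference.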
Conclusion
A novel two-phase framework is developed for pan-sharpening, which effectively enhances the detail information of LRMS images and produces appealing HRMS images. The progressive detail enhancement phase enhances the LRMS image by fusing extra information from multi-scale PAN images, while the channel fusion phase fuses the channel features through structure-preserving modules. A series of ablation studies verifies the effectiveness of our designs, and experimental results on several widely used datasets demonstrate the advantage of our method over other state-of-the-art methods.