Image fusion method of convolution sparsity and detail saliency map analysis
2021, Vol. 26, No. 10, pp. 2433-2449
Received: 2020-07-21; Revised: 2020-09-21; Accepted: 2021-09-28; Published in print: 2021-10-16
DOI: 10.11834/jig.200205
Objective
To address the problems of insufficient information content and blurred edge details in image fusion, this paper combines multi-scale analysis, sparse representation, and saliency-feature image representation methods and proposes an image fusion method based on convolutional sparsity and detail saliency map analysis.
Method
First, an adaptive training sample set is constructed and used to train a dictionary filter bank that fits the images to be fused more closely. The source images are then decomposed by multi-scale analysis into low- and high-frequency sub-images. The low-frequency sub-images are represented with convolutional sparse representation, and a weighted fusion rule built from weight analysis yields a fused low-frequency sub-image with richer information. For the high-frequency sub-images, detail saliency maps are constructed and analyzed for similarity, and the resulting high-frequency fusion rule yields a fused high-frequency sub-image with more prominent edge details. Finally, the corresponding inverse transform produces the final fused image.
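As a rough, end-to-end illustration of this pipeline, the Python sketch below wires the steps together. It is a minimal sketch under loud assumptions: a one-level pywt wavelet decomposition stands in for the paper's multi-scale transform, and the averaging and choose-max rules are placeholders for the convolutional-sparsity and saliency-based rules elaborated in the Method section below; the function name fuse_pair and all parameter choices are ours, not the authors'.

```python
# Minimal pipeline sketch (NOT the authors' implementation).
# Assumptions: a one-level db2 wavelet decomposition stands in for the
# multi-scale transform used in the paper, and placeholder rules stand
# in for the CSR- and saliency-based fusion rules.
import numpy as np
import pywt

def fuse_pair(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Fuse two registered grayscale images of equal size."""
    # Multi-scale analysis: split each source into low/high-frequency parts.
    low_a, high_a = pywt.dwt2(img_a, "db2")
    low_b, high_b = pywt.dwt2(img_b, "db2")

    # Placeholder low-frequency rule: plain averaging (the paper instead
    # weights by the activity of convolutional sparse coefficients).
    low_f = 0.5 * (low_a + low_b)

    # Placeholder high-frequency rule: per-coefficient choose-max on
    # absolute value (the paper instead uses detail saliency maps and
    # a similarity analysis).
    high_f = tuple(np.where(np.abs(ha) >= np.abs(hb), ha, hb)
                   for ha, hb in zip(high_a, high_b))

    # Inverse transform reconstructs the fused image.
    return pywt.idwt2((low_f, high_f), "db2")
```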
Result
Experiments are conducted on three randomly selected gray image sets and four color image sets, and the results are compared with those of seven representative fusion methods. The proposed method achieves clearly better visual quality, and over the seven methods it improves average gradient by 39.3%, 32.1%, 34.7%, 28.3%, 35.8%, 28%, and 30.4% on average; information entropy by 6.2%, 4.5%, 1.9%, 0.4%, 1.5%, 2.4%, and 2.9%; spatial frequency by 31.8%, 25.8%, 29.7%, 22.2%, 28.6%, 22.9%, and 25.3%; and edge strength by 39.5%, 32.1%, 35.1%, 28.8%, 36.6%, 28.7%, and 31.3%, respectively.
Conclusion
The proposed method alleviates the problem of insufficient information content to a certain extent, substantially reduces the blurring of image edge details, and preserves the image content whose singularity is most pronounced.
Objective
Image fusion combines multiple images of the same scene according to certain rules to obtain a better fused image. The fused image contains the salient features of the source images, which improves the utilization of image information and provides more reliable support for subsequent image-based decision-making. Multi-scale analysis, sparse representation, and saliency methods are three kinds of image representation methods that can be used in image fusion. Multi-scale analysis is an active field in image fusion, but only an appropriate transform improves the performance of the fused image. Sparse representation performs well for image representation, but its multi-valued representation of the image easily leads to the loss of details. The saliency method is distinctive in its ability to capture salient targets in an image; however, visual saliency is a subjective image description index, and how to properly construct a saliency map remains an open problem. To address the problems of insufficient information and blurred edge details in image fusion, an image fusion method based on convolutional sparsity and detail saliency map analysis is proposed, which combines the advantages of multi-scale analysis, sparse representation, and saliency methods while avoiding their disadvantages as much as possible.
Method
First, to address insufficient image information after fusion, a multi-directional method is proposed to construct an adaptive training sample set; dictionary training on this set then yields a richer dictionary filter bank suited to the images to be fused. Second, low- and high-frequency sub-images are obtained by multi-scale analysis. The low-frequency sub-image carries most of the basic information of the source image and is encoded by convolutional sparse coding over the trained adaptive dictionary filter bank, which yields a global, single-valued sparse coefficient matrix.
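The convolutional sparse coding of a low-frequency sub-image can be reproduced with, for example, SPORCO, Wohlberg's open-source library for convolutional sparse representations. The sketch below is illustrative rather than the authors' code: the random dictionary D stands in for the adaptively trained filter bank, and the regularization weight lmbda and solver options are assumed values.

```python
# Hedged sketch of the convolutional sparse coding step using SPORCO.
# The random dictionary and the lambda value are illustrative stand-ins;
# the paper trains an adaptive dictionary filter bank instead.
import numpy as np
from sporco.admm import cbpdn

def csc_coefficients(low: np.ndarray, D: np.ndarray,
                     lmbda: float = 0.05) -> np.ndarray:
    """Return convolutional sparse coefficient maps for one
    low-frequency sub-image, as an (H, W, n_filters) array."""
    opt = cbpdn.ConvBPDN.Options({'Verbose': False,
                                  'MaxMainIter': 100,
                                  'RelStopTol': 5e-3})
    solver = cbpdn.ConvBPDN(D, low, lmbda, opt=opt)
    X = solver.solve()   # shape (H, W, 1, 1, n_filters)
    return X.squeeze()   # drop the singleton axes

# Example: a random 8x8, 32-filter dictionary as a placeholder.
D = np.random.randn(8, 8, 32)
```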
the sparse matrix of global single-value representation is obtained. The activity of each pixel in the image can be represented by the L
1
norm of this multidimensional sparse representation coefficient matrix. The more prominent its feature is
the more active the image is
so the weight can be measured by measuring the activity of the image to be fused. Through weight analysis
a weighted fusion rule is constructed to obtain the low-frequency subgraph with more abundant information. At the same time
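A minimal numpy sketch of such an activity-weighted rule follows. The per-pixel L1-norm activity matches the description above, while the 3x3 smoothing window and the soft-weight form are our assumptions rather than details given in the abstract.

```python
# Hedged sketch of an activity-weighted low-frequency fusion rule.
# Activity = per-pixel L1 norm of the coefficient maps; the 3x3
# smoothing window is our assumption, not a detail from the paper.
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_low(low_a, low_b, coef_a, coef_b, win=3):
    """coef_* are coefficient maps of shape (H, W, n_filters)."""
    act_a = uniform_filter(np.abs(coef_a).sum(axis=-1), size=win)
    act_b = uniform_filter(np.abs(coef_b).sum(axis=-1), size=win)
    w_a = act_a / (act_a + act_b + 1e-12)   # soft weights in [0, 1]
    return w_a * low_a + (1.0 - w_a) * low_b
```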
At the same time, to address the problem of blurred edge details during fusion, the high-frequency sub-images are fused as follows. Because the high-frequency sub-image reflects the singularity of the image, a high-frequency detail saliency map is constructed to highlight this property; it is built by cross-reconstruction of the high- and low-frequency sub-images. A similarity analysis based on the distance between each high-frequency sub-image and the detail saliency map then establishes a high-frequency fusion rule, which yields a fused high-frequency sub-image with more prominent edge details. Finally, the fused high- and low-frequency sub-images are inverse-transformed to obtain the final fused image.
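The abstract does not spell out the cross-reconstruction or the distance measure, so the following sketch is only one plausible reading of the high-frequency rule: a smoothed joint magnitude of the two sources' coefficients stands in for the detail saliency map, and each pixel keeps the coefficient whose magnitude is closer to that map. Every name and formula here is an assumption for illustration.

```python
# Hedged sketch of a saliency-guided high-frequency selection rule.
# The detail saliency map here is simply a smoothed joint magnitude of
# both sources' high-frequency coefficients; the paper's construction
# (cross-reconstruction of sub-images) is not specified in the abstract.
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_high(high_a: np.ndarray, high_b: np.ndarray,
              win: int = 3) -> np.ndarray:
    """Fuse one pair of high-frequency coefficient planes."""
    # Stand-in detail saliency map: local mean of the max magnitude.
    sal = uniform_filter(np.maximum(np.abs(high_a), np.abs(high_b)),
                         size=win)
    # Similarity analysis: per-pixel distance of each source's
    # coefficient magnitude to the saliency map; keep the closer one.
    dist_a = np.abs(np.abs(high_a) - sal)
    dist_b = np.abs(np.abs(high_b) - sal)
    return np.where(dist_a <= dist_b, high_a, high_b)
```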
Result
In the experiments, three gray image sets (the Mfi, Irvis, and Medical image sets) and four color image sets (the Cmfi, Cirvis, Cmedical, and Crsi image sets) are randomly selected to assess the subjective visual quality and objective metric values of the proposed method, NSST-Self-adaption-CSR-MAP (NSaCM). The results are compared with those of seven typical fusion methods: convolutional sparse representation, convolutional sparsity-based morphological component analysis, parameter-adaptive pulse-coupled neural network, convolutional neural network, double-two direction sparse representation, wave-average-max, and non-subsampled contourlet transform fusion. The experimental results show that the subjective visual quality of NSaCM is clearly better than that of the other fusion methods. In average gradient, NSaCM improves on the seven methods above by 39.3%, 32.1%, 34.7%, 28.3%, 35.8%, 28%, and 30.4% on average, respectively; in information entropy, by 6.2%, 4.5%, 1.9%, 0.4%, 1.5%, 2.4%, and 2.9%; in spatial frequency, by 31.8%, 25.8%, 29.7%, 22.2%, 28.6%, 22.9%, and 25.3%; and in edge strength, by 39.5%, 32.1%, 35.1%, 28.8%, 36.6%, 28.7%, and 31.3%.
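For reference, the four objective metrics quoted above have standard definitions in the fusion literature. The numpy sketch below computes common forms of them; normalization constants and the choice of gradient operator may differ from the exact formulas used in the paper.

```python
# Common forms of the four objective fusion metrics used above.
# Details (normalization, gradient operator, histogram range) may
# differ from the exact formulas in the paper.
import numpy as np
from scipy.ndimage import sobel

def average_gradient(img):
    gy, gx = np.gradient(img.astype(float))
    return np.mean(np.sqrt((gx**2 + gy**2) / 2.0))

def information_entropy(img, levels=256):
    # Shannon entropy of the gray-level histogram (assumes 8-bit range).
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def spatial_frequency(img):
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf**2 + cf**2)

def edge_strength(img):
    # Mean Sobel gradient magnitude.
    img = img.astype(float)
    return np.mean(np.sqrt(sobel(img, 0)**2 + sobel(img, 1)**2))
```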
Conclusion
NSaCM is suitable for both gray and color images. The experimental results show that the fused images obtained by the proposed method achieve better results on both subjective and objective indicators. The gain in information entropy shows that the fused image obtained by NSaCM contains more information, inherits more of the basic information of the source images, and alleviates the problem of insufficient information to a certain extent. The gains in average gradient, spatial frequency, and edge strength show that NSaCM expresses detail contrast better: the overall activity of the fused image is higher, and the image content that makes the singularity more obvious is preserved during fusion, which addresses the problem of blurred edge details in image fusion. However, the method is still lacking in real-time performance in terms of time consumption, and further research is needed.