Image fusion method of convolution sparsity and detail saliency map analysis

Yang Pei1, Gao Leifu1, Zi Lingling2 (1. Institute for Optimization and Decision Analytics, Liaoning Technical University, Fuxin 123000, China; 2. College of Electronic and Information Engineering, Liaoning Technical University, Huludao 125105, China)

Abstract
Objective To address the problems of insufficient information and blurred edge details in image fusion, an image fusion method based on convolutional sparsity and detail saliency map analysis is proposed, combining multi-scale analysis, sparse representation, and saliency-based image representation. Method First, an adaptive sample set is constructed to train a dictionary filter bank that better fits the images to be fused. The source images are then decomposed by multi-scale analysis into high- and low-frequency subgraphs. The low-frequency subgraphs are represented by convolutional sparse coding, and a weighted fusion rule built through weight analysis yields a low-frequency subgraph with richer information; for the high-frequency subgraphs, detail saliency maps are constructed and a similarity-based high-frequency fusion rule yields a high-frequency subgraph with more prominent edge details. Finally, the corresponding inverse transform produces the fused image. Result Experiments are conducted on three randomly selected sets of gray images and four sets of color images, with comparisons against seven representative fusion methods. The results show that the proposed method is clearly superior in visual quality, improving on the seven methods on average by 39.3%, 32.1%, 34.7%, 28.3%, 35.8%, 28%, and 30.4% in average gradient; by 6.2%, 4.5%, 1.9%, 0.4%, 1.5%, 2.4%, and 2.9% in information entropy; by 31.8%, 25.8%, 29.7%, 22.2%, 28.6%, 22.9%, and 25.3% in spatial frequency; and by 39.5%, 32.1%, 35.1%, 28.8%, 36.6%, 28.7%, and 31.3% in edge strength. Conclusion The proposed method alleviates the problem of insufficient information to some extent, effectively reduces blurring of image edge details, and preserves the image content with the most pronounced singularities.
Keywords
Image fusion method of convolution sparsity and detail saliency map analysis

Yang Pei1, Gao Leifu1, Zi Lingling2(1.Institute for Optimization and Decision Analytics, Liaoning Technical University, Fuxin 123000, China;2.College of Electronic and Information Engineering, Liaoning Technical University, Huludao 125105, China)

Abstract
Objective Image fusion combines multiple images of the same scene according to certain rules to obtain a better fused image. The fused image contains the salient features of the images to be fused, which improves the utilization of image information and supports more accurate image-based decision-making later on. Multi-scale analysis, sparse representation, and saliency methods are three kinds of image representation that can be used in image fusion. Multi-scale analysis is an active direction in image fusion, but only an appropriate transform can improve the performance of the fused image. Sparse representation performs well for image representation, but its multi-valued representation of the image easily leads to the loss of details. The saliency method is distinctive in its ability to capture salient targets in the image; however, visual saliency is a subjective descriptor, and the proper construction of a saliency map remains an open problem. To address the problems of insufficient information and blurred edge details in image fusion, an image fusion method based on convolutional sparsity and detail saliency map analysis is proposed, which combines the advantages of multi-scale analysis, sparse representation, and saliency methods while avoiding their disadvantages as far as possible. Method First, to address insufficient image information after fusion, a multi-directional method is proposed to construct an adaptive training sample set. Dictionary training then yields a richer dictionary filter bank suited to the images to be fused. Second, low- and high-frequency subgraphs are obtained by multi-scale analysis. The low-frequency subgraph contains much of the basic information of the source image and is represented by convolutional sparse coding using the trained adaptive dictionary filter bank.
In this way, a globally single-valued sparse coefficient matrix is obtained. The activity of each pixel can be measured by the L1 norm of this multidimensional sparse coefficient matrix: the more prominent a feature, the more active the image at that location, so fusion weights can be derived by measuring the activity of the images to be fused. Through this weight analysis, a weighted fusion rule is constructed to obtain a low-frequency subgraph with richer information. Meanwhile, to solve the problem of blurred edge details during fusion, the high-frequency subgraphs are processed as follows. Because the high-frequency subgraph reflects the singularities of the image, a high-frequency detail saliency map is constructed to highlight this feature; it is built by cross reconstruction of the high- and low-frequency subgraphs. According to the distance between each high-frequency subgraph and its detail saliency map, a similarity analysis is carried out and a high-frequency fusion rule is established, yielding a high-frequency subgraph with more prominent edge details. Finally, the fused image is obtained by applying the inverse transform to the fused high- and low-frequency subgraphs. Result In the experiments, three sets of gray images (the Mfi, Irvis, and Medical sets) and four sets of color images (the Cmfi, Cirvis, Cmedical, and Crsi sets) are randomly selected to evaluate the subjective visual quality and objective metrics of the proposed method, NSST-Self-adaption-CSR-MAP (NSaCM).
The results are compared with seven representative fusion methods: convolutional sparse representation, convolutional-sparsity-based morphological component analysis, parameter-adaptive pulse-coupled neural network, convolutional neural network, double-two direction sparse representation, wave-average-max, and non-subsampled contourlet transform fusion. The experimental results show that the subjective visual quality of NSaCM is clearly better than that of the other fusion methods. In average gradient, NSaCM improves on the seven methods above by 39.3%, 32.1%, 34.7%, 28.3%, 35.8%, 28%, and 30.4% on average, respectively; in information entropy, by 6.2%, 4.5%, 1.9%, 0.4%, 1.5%, 2.4%, and 2.9%; in spatial frequency, by 31.8%, 25.8%, 29.7%, 22.2%, 28.6%, 22.9%, and 25.3%; and in edge strength, by 39.5%, 32.1%, 35.1%, 28.8%, 36.6%, 28.7%, and 31.3%. Conclusion NSaCM is suitable for both gray and color images. The experiments show that its fused images achieve better results on both subjective and objective indicators. The gain in information entropy indicates that the fused image obtained by NSaCM contains more information and inherits more of the basic information of the source images, alleviating the problem of insufficient information to a certain extent. The gains in average gradient, spatial frequency, and edge strength indicate that NSaCM expresses detail contrast better.
The overall activity of the fused image is higher, and the image content with the most pronounced singularities is preserved during fusion, which addresses the problem of blurred edge details in image fusion. However, the method's running time is still too high for real-time use, and further research is needed.
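The two fusion rules described in the Method section can be sketched in code. This is a minimal illustration only: the NSST decomposition, the adaptive dictionary training, and the cross-reconstruction step that builds the detail saliency maps are all omitted (random arrays stand in for their outputs), and the function names and the exact form of the similarity test are assumptions, not the authors' implementation.

```python
import numpy as np

def l1_activity(coeffs):
    # coeffs: (K, H, W) stack of convolutional sparse coefficient maps.
    # Per-pixel activity is the L1 norm across the K filter channels.
    return np.abs(coeffs).sum(axis=0)

def fuse_low(low_a, low_b, coeffs_a, coeffs_b, eps=1e-12):
    # Weighted low-frequency fusion: each pixel is blended in proportion
    # to the sparse-coefficient activity of the two source images.
    act_a, act_b = l1_activity(coeffs_a), l1_activity(coeffs_b)
    w = act_a / (act_a + act_b + eps)   # per-pixel weight for image A
    return w * low_a + (1.0 - w) * low_b

def fuse_high(high_a, high_b, sal_a, sal_b):
    # One plausible reading of the similarity rule: per pixel, keep the
    # high-frequency coefficient closer to its own detail saliency map.
    d_a = np.abs(high_a - sal_a)
    d_b = np.abs(high_b - sal_b)
    return np.where(d_a <= d_b, high_a, high_b)

# Toy demo with random data standing in for decomposed subbands.
rng = np.random.default_rng(0)
K, H, W = 8, 32, 32
low_a, low_b = rng.random((H, W)), rng.random((H, W))
coeffs_a, coeffs_b = rng.random((K, H, W)), rng.random((K, H, W))
fused_low = fuse_low(low_a, low_b, coeffs_a, coeffs_b)
fused_high = fuse_high(low_a, low_b, coeffs_a[0], coeffs_b[0])
```

Because the low-frequency rule is a per-pixel convex combination, the fused low-frequency subgraph always stays within the range spanned by the two sources, which is what lets it inherit base intensity information from both images.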
Keywords
