Published: 2019-09-16
Medical Image Processing
Received: 2019-01-22; Revised: 2019-03-16
Funding: Natural Science Foundation of Shanxi Province (201701D121062)
About the first author:
Wang Lifang, born in 1977, female, associate professor, Ph.D.; her research interests include machine vision, big data processing, and medical image processing. E-mail: wsm2004@nuc.edu.cn
Qin Pinle, male, associate professor; his research interests include machine vision, big data processing, and 3-D reconstruction. E-mail: QPL@nuc.edu.cn. Lin Suzhen, female, professor; her research interests include image processing and virtual restoration of cultural relics. E-mail: lsz@nuc.edu.cn. Gao Yuan, female, associate professor; her research interests include machine vision, big data processing, and 3-D reconstruction. E-mail: yuan_g@126.com. Zhang Chengcheng, female, M.S. candidate; her research interests include medical image registration and machine learning. E-mail: 1738490350@qq.com
CLC number: TP391.4
Document code: A
Article ID: 1006-8961(2019)09-1588-16
Abstract
Objective To address the limited detail-preservation ability of sparse-coding-based medical image fusion methods, a multimodal medical image fusion method based on convolutional sparse representation double dictionary learning and an adaptive pulse-coupled neural network (PCNN) is proposed. Method First, convolutional sparse and convolutional low-rank sub-dictionaries are learned from registered training images. Under the two dictionaries, the convolutional sparse and convolutional low-rank representation coefficients are obtained with the alternating direction method of multipliers (ADMM), and the convolutional sparse and convolutional low-rank components are reconstructed with the corresponding dictionaries. Then, the novel sum-modified Laplacian (NSML) and the novel sum-modified spatial frequency (NMSF) are used to excite the PCNN to fuse the convolutional sparse and convolutional low-rank components, respectively. Finally, the fused convolutional sparse and low-rank components are combined into the final fused image. Result Simulations on gray-scale and color images and comparisons with other fusion methods show that the proposed method is clearly superior to the six compared methods in objective evaluation and visual quality, with the best performance on all four indicators. Compared with the six multimodal image fusion methods, the average standard deviation of the three groups of experiments increased by 7%, 10%, and 5.2%; the average mutual information increased by 33.4%, 10.9%, and 11.3%; the average spatial frequency increased by 8.2%, 9.6%, and 5.6%; and the average edge preservation factor increased by 16.9%, 20.7%, and 21.6%, respectively. Conclusion Compared with other sparse representation methods, the proposed method effectively improves the quality of multimodal medical image fusion, better preserves the detail information of the source images, enriches the information of the fused image, conforms to the visual characteristics of the human eye, and can effectively assist doctors in disease diagnosis.
Keywords
medical image fusion; double dictionary learning; convolutional sparsity; convolutional low rank; pulse-coupled neural network (PCNN)
Abstract
Objective The fusion of multimodal medical images is an important medical imaging method that integrates complementary information from multimodal images to produce new composite images. Sparse representation has achieved great success in medical image fusion in the past few years. However, because the sparse representation method is based on sliding-window technology, its ability to preserve the details of the fused image is insufficient. Therefore, a multimodal medical image fusion method based on convolutional sparse representation double dictionary learning and an adaptive PCNN (pulse-coupled neural network) is proposed. Method According to the low-rank and sparsity characteristics of the image, the method decomposes the source image into two parts and constructs a double dictionary based on convolutional sparse representation. The sparse component contains a large amount of detail texture, and the low-rank component contains basic information such as contours and brightness. First, the low-rank feature and the sparse feature are extracted from the training images to form two basic dictionaries that represent the test image. The dictionary learning model is improved by adding low-rank and sparse constraints to the low-rank component and the sparse component, respectively, to enhance the discriminability of the double dictionary. In the dictionary learning process, the alternating iterative update is divided into three parts: auxiliary variable updates, sparse coding, and dictionary updates. The convolutional sparse and convolutional low-rank sub-dictionaries for the training images are obtained by cycling through the three parts. Then, total variation regularization is incorporated into the image decomposition model, and the Fourier-domain alternating direction method of multipliers is used to obtain the representation coefficients of the sparse and low-rank components of the source image in their respective sub-dictionaries.
The process alternates between two updates, namely, the convolutional sparse coefficient update and the convolutional low-rank coefficient update. Second, the sparse component of the source image is obtained by convolving the convolutional sparse coefficients with the corresponding sub-dictionary; similarly, the convolutional low-rank coefficients are convolved with the corresponding sub-dictionary to obtain the low-rank component of the source image. The novel sum-modified spatial frequency of the sparse component is computed as the external excitation of the pulse-coupled neural network to preserve image details, and the linking strength is adaptively determined by the regional average gradient to obtain the firing map of the sparse component. The novel sum-modified Laplacian of the low-rank component is computed as the external excitation of the pulse-coupled neural network, and the linking strength is again adaptively determined by the regional average gradient to obtain its firing map. The fused sparse component is obtained by comparing the firing counts of the sparse components, and the low-rank components of the source images are likewise fused through their firing maps. Finally, the fused image is obtained by combining the convolutional sparse and convolutional low-rank components, further improving the quality of the fused image. Result Three sets of brain multimodal medical images (namely, CT/MR, MR/PET, and MR/SPECT) were simulated and compared with those processed by other fusion methods. Experimental results show that the proposed fusion method is significantly superior to the six methods according to objective evaluation and visual quality comparison and has the best performance on four indicators. Compared with the six multimodal image fusion methods, the mean standard deviation of the three groups of experiments increased by 7%, 10%, and 5.2%, respectively.
The average mutual information increased by 33.4%, 10.9%, and 11.3%, respectively. The average spatial frequency increased by 8.2%, 9.6%, and 5.6%, respectively. The average edge preservation factors increased by 16.9%, 20.7%, and 21.6%, respectively. Conclusion Compared with other sparse representation methods, the proposed algorithm effectively improves the quality of multimodal medical image fusion, better preserves the detailed information of the source images, enriches the information of the fused image, and conforms to the visual characteristics of the human eye, thereby effectively assisting doctors in diagnosing diseases.
Key words
medical image fusion; double dictionary learning; convolutional sparsity; convolutional low rank; pulse-coupled neural network (PCNN)
0 Introduction
Medical imaging plays a vital role in clinical applications such as modern medical treatment and diagnosis. To obtain sufficient information, clinicians usually need medical images of multiple modalities, such as computed tomography (CT), positron emission tomography (PET), and magnetic resonance (MR) imaging. However, any single imaging modality has limitations and cannot reflect all the detail information doctors need. Integrating the information of medical images from different modalities into one image, thereby combining the complementary information contained in the source images, can therefore help doctors make better decisions in medical diagnosis.
Medical image fusion has become an effective information fusion technique. Fusion methods can be divided into two broad categories: spatial-domain methods and transform-domain methods [1]. Spatial-domain methods usually solve the fusion problem through image patches or pixel-level gradient information, but they easily lose image details. Compared with spatial-domain methods, transform-domain methods have more advantages in characterizing the local features of signals and can compensate for the insufficient detail expression of spatial-domain methods; because they also agree better with the physiological mechanism of the human visual system (HVS), multiscale transform-domain fusion methods are regarded as the mainstream approach to multimodal image fusion [2]. Typical transform-domain methods include multiscale-analysis-based methods, sparse-representation-based methods, and image pyramid methods. Multiscale analysis methods include the discrete wavelet transform (DWT) [3], the contourlet transform (CT) [4], and the shearlet transform (ST) [5]. These methods represent the structural information of the source images well, but the directional information they can extract is limited, and, because they lack shift invariance, they are prone to pseudo-Gibbs artifacts [6]. The nonsubsampled contourlet transform (NSCT) and the nonsubsampled shearlet transform (NSST) are shift invariant and overcome pseudo-Gibbs artifacts well [7-8], but their high computational complexity limits their practical application.
With the emergence of compressed sensing theory [9], fusion methods based on sparse representation and dictionary learning have also been widely applied to image fusion [10-12], and they generally outperform most traditional fusion methods [13]. In sparse-representation-based methods, adjacent image patches overlap, so each pixel receives multiple values. Ideally these values should be equal, keeping the overlapping patches consistent [14]; however, because sparse coding is performed independently on each patch, the correlation between image pixels is ignored and each pixel ends up with multiple unequal values. Most fusion methods then obtain the final value of each pixel by aggregating and averaging [15], so image details are easily smoothed or even lost during fusion. Yin et al. [16] obtained a joint dictionary by using the source images as training data and fused images with a maximum-weighted multi-norm fusion rule, but detail loss remained. Zong et al. [17] proposed a fusion method based on classified image patches, building sub-dictionaries by classifying patches with histogram-of-oriented-gradients (HOG) features; this reduced, but did not eliminate, the smoothing of details. Zhang et al. [18] proposed an improved fusion method combining multi-task robust sparse representation with spatial context information, but, like most sparse-representation-based methods, it encodes local patches rather than the whole image and therefore still preserves details poorly. Moreover, common sparse-coding-based fusion methods use only a single dictionary to represent the different morphological parts of the source image, which easily leads to a sub-optimal representation of the image's intrinsic information, fails to fully capture its feature information, and causes information loss. Rong et al. [15] proposed representing different image components with different dictionaries, but their dictionaries were obtained analytically and are difficult to adapt to the data.
To address these problems, this paper proposes a multimodal medical image fusion method based on convolutional sparse representation double dictionary learning and an adaptive pulse-coupled neural network (CSRDD-APCNN). The method divides the source image into a sparse part and a low-rank part and constructs a double dictionary based on convolutional sparse representation, where the sparse component contains abundant detail texture and the low-rank component contains basic information such as contours, brightness, and fine details. To enhance the discriminability of the double dictionary, low-rank and sparse constraints are imposed on the low-rank and sparse components, respectively, improving the dictionary learning model, which is solved by alternating iterations to obtain the double dictionary. Then, total variation (TV) regularization is incorporated into the image decomposition model, which is solved with the Fourier-domain alternating direction method of multipliers (ADMM) [19] to decompose the source image into convolutional sparse and convolutional low-rank components. To fully preserve the detail information of the source images, the novel sum-modified spatial frequency (NMSF) of the convolutional sparse component and the novel sum-modified Laplacian (NSML) of the convolutional low-rank component are taken as the input excitations of the adaptive PCNN, and the components are fused according to the output firing counts. Finally, the fused image is obtained by combining the convolutional sparse and convolutional low-rank components, further improving fusion quality.
1 Related work
1.1 Convolutional sparse representation (CSR)
The basic idea of convolutional sparse representation is to model the source image $\boldsymbol{s}$ as the sum of convolutions between a set of dictionary filters $\boldsymbol{d}_m$ and coefficient maps $\boldsymbol{x}_m$, obtained by solving
$ \arg \mathop {\min }\limits_{\left\{ {{\mathit{\boldsymbol{x}}_m}} \right\}} \frac{1}{2}\left\| {\sum\limits_m {{\mathit{\boldsymbol{d}}_m}} * {\mathit{\boldsymbol{x}}_m} - \mathit{\boldsymbol{s}}} \right\|_2^2 + \lambda \sum\limits_m {{{\left\| {{\mathit{\boldsymbol{x}}_m}} \right\|}_1}} $ | (1) |
where $*$ denotes convolution and $\lambda$ is the regularization parameter. The corresponding convolutional dictionary learning problem over a set of training images $\boldsymbol{s}_k$ is
$ \begin{array}{*{20}{c}} {\mathop {\arg \min }\limits_{\left\{ {{\mathit{\boldsymbol{d}}_m}} \right\},\left\{ {{\mathit{\boldsymbol{x}}_{m,k}}} \right\}} \frac{1}{2}\sum\limits_k {\left\| {\sum\limits_m {{\mathit{\boldsymbol{d}}_m} * {\mathit{\boldsymbol{x}}_{m,k}} - {\mathit{\boldsymbol{s}}_k}} } \right\|_2^2} + }\\ {\lambda \sum\limits_{m,k} {{{\left\| {{\mathit{\boldsymbol{x}}_{m,k}}} \right\|}_1}} \quad {\rm{s}}{\rm{.t}}{\rm{.}}\;{{\left\| {{\mathit{\boldsymbol{d}}_m}} \right\|}_2} = 1\;\forall m} \end{array} $ | (2) |
where $\boldsymbol{x}_{m,k}$ is the coefficient map of filter $\boldsymbol{d}_m$ for the $k$th training image $\boldsymbol{s}_k$, and the constraint $\left\| \boldsymbol{d}_m \right\|_2 = 1$ removes the scaling ambiguity between filters and coefficients.
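Problems of the form of Eq. (1) are commonly solved with ADMM in the Fourier domain [19], where the convolutions become per-frequency products and the linear system is rank-one per frequency. The sketch below illustrates this, assuming NumPy; the filter sizes, `lam`, `rho`, and iteration count are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def conv_sparse_code(s, filters, lam=0.05, rho=1.0, iters=50):
    """Convolutional sparse coding of Eq. (1) via Fourier-domain ADMM (sketch).

    s       : 2-D image of shape (H, W)
    filters : list of small 2-D filters d_m
    Returns the coefficient maps (M, H, W) and the reconstruction sum_m d_m * x_m.
    """
    H, W = s.shape
    M = len(filters)
    # Zero-pad each filter to image size and precompute its 2-D DFT.
    Df = np.zeros((M, H, W), dtype=complex)
    for m, d in enumerate(filters):
        pad = np.zeros((H, W))
        pad[:d.shape[0], :d.shape[1]] = d
        Df[m] = np.fft.fft2(pad)
    Sf = np.fft.fft2(s)
    z = np.zeros((M, H, W))           # auxiliary (thresholded) maps
    u = np.zeros_like(z)              # scaled dual variable
    denom = rho + np.sum(np.abs(Df) ** 2, axis=0)
    for _ in range(iters):
        # x-step: solve (D^H D + rho I) x = D^H s + rho (z - u) per frequency
        # with the Sherman-Morrison identity (D^H D is rank one per frequency).
        b = np.conj(Df) * Sf + rho * np.fft.fft2(z - u)
        xf = (b - np.conj(Df) * (np.sum(Df * b, axis=0) / denom)) / rho
        x = np.real(np.fft.ifft2(xf))
        # z-step: elementwise soft thresholding enforces the l1 penalty.
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        u += x - z
    recon = np.real(np.fft.ifft2(np.sum(Df * np.fft.fft2(z), axis=0)))
    return z, recon
```

For a small regularization weight, the reconstruction should approach the input image while the coefficient maps stay sparse.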
1.2 Pulse-coupled neural network (PCNN)
1.2.1 PCNN model
The PCNN is a 2-D network of laterally connected pulse-coupled neurons. Each neuron corresponds to one pixel and is influenced by its neighboring neurons. As shown in Fig. 1, a single neuron consists of three parts: the receptive field, the modulation field, and the pulse generator. The PCNN model is described by
$ \left\{ {\begin{array}{*{20}{l}} {{F_{ij}}(n) = {S_{ij}}}\\ {{L_{ij}}(n) = {{\rm{e}}^{ - {\alpha _L}}}{L_{ij}}(n - 1) + {V_L}\sum\limits_{k,l} {{W_{ijkl}}} {Y_{kl}}(n - 1)}\\ {{U_{ij}}(n) = {F_{ij}}(n)\left( {1 + \beta {L_{ij}}(n)} \right)}\\ {{\theta _{ij}}(n) = {{\rm{e}}^{ - {\alpha _\theta }}}{\theta _{ij}}(n - 1) + {V_\theta }{Y_{ij}}(n - 1)}\\ {{Y_{ij}}(n) = \left\{ {\begin{array}{*{20}{l}} 1&{{U_{ij}}(n) > {\theta _{ij}}(n)}\\ 0&{{\rm{otherwise}}} \end{array}} \right.} \end{array}} \right. $ | (3) |
where $S_{ij}$ is the external stimulus of neuron $(i,j)$, $F_{ij}$ and $L_{ij}$ are the feeding and linking inputs, $U_{ij}$ is the internal activity, $\theta_{ij}$ is the dynamic threshold, $Y_{ij}$ is the pulse output at iteration $n$, $W_{ijkl}$ are the linking weights, $\beta$ is the linking strength, $\alpha_L$ and $\alpha_\theta$ are decay constants, and $V_L$ and $V_\theta$ are amplitude coefficients.
The coupling in this PCNN model is mainly reflected in the linking input: a firing neuron raises the internal activity of its neighbors and may trigger them to fire as well, and the accumulated firing count of each neuron forms the firing map used in the fusion rules below.
1.2.2 Adaptive linking strength $β$
In PCNN-based fusion algorithms, the linking strength $\beta$ is usually set to the same constant for all neurons, which ignores the differences between image regions. Because the human visual system is more sensitive to regions with salient edges and textures, this paper determines $\beta$ adaptively from the regional average gradient, so that neurons in regions with richer detail receive a larger linking strength.
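A minimal NumPy sketch of the PCNN iteration of Eq. (3) is given below; it returns the accumulated firing-count map. The 3×3 linking weights and the parameter defaults are illustrative assumptions, not the paper's settings, and `beta` may be a scalar or a per-pixel array (the adaptive linking strength).

```python
import numpy as np

def pcnn_firing_map(S, beta=0.2, alpha_L=1.0, alpha_theta=0.2,
                    V_L=1.0, V_theta=20.0, n_iter=100):
    """Firing-count map of the PCNN of Eq. (3) driven by stimulus S."""
    H, W = S.shape
    L = np.zeros((H, W))
    theta = np.zeros((H, W))
    Y = np.zeros((H, W))
    T = np.zeros((H, W))                     # accumulated firing counts
    w = np.array([[0.5, 1.0, 0.5],           # assumed 3x3 linking weights W_ijkl
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    for _ in range(n_iter):
        # Linking input: weighted sum of the neighbors' previous outputs.
        link = np.zeros((H, W))
        Yp = np.pad(Y, 1)
        for a in range(3):
            for b in range(3):
                link += w[a, b] * Yp[a:a + H, b:b + W]
        L = np.exp(-alpha_L) * L + V_L * link
        U = S * (1.0 + beta * L)             # modulation field
        theta = np.exp(-alpha_theta) * theta + V_theta * Y
        Y = (U > theta).astype(float)        # pulse generator
        T += Y
    return T
```

Pixels with stronger stimuli cross the decaying threshold earlier and therefore tend to accumulate more firings, which is what the fusion rules in Section 2.3 compare.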
2 Multimodal medical image fusion method
To address the limited detail-preservation ability of current sparse-representation-based fusion methods, this paper proposes a multimodal medical image fusion method based on convolutional sparse representation double dictionary learning and an adaptive PCNN. The overall pipeline is shown in Fig. 2. First, a double dictionary learning model is built to learn discriminative convolutional low-rank and convolutional sparse sub-dictionaries. Then, through the image decomposition model, the source images are decomposed into convolutional low-rank and convolutional sparse components, which are fused separately with the adaptive PCNN and finally recombined into the fused image.
2.1 Discriminative convolutional low-rank and convolutional sparse double dictionary learning
2.1.1 Building the double dictionary learning model
Let $\boldsymbol{y}_m$ denote the $m$th training image, written as the sum of a low-rank component $\boldsymbol{y}_m^c$ and a sparse component $\boldsymbol{y}_m^t$
$ {\mathit{\boldsymbol{y}}_m} = \mathit{\boldsymbol{y}}_m^c + \mathit{\boldsymbol{y}}_m^t $ |
where $\boldsymbol{y}_m^c$, the low-rank component, carries basic information such as contours and brightness, and $\boldsymbol{y}_m^t$, the sparse component, carries detail texture. Representing the two components with the convolutional low-rank sub-dictionary $\{\boldsymbol{d}_k^c\}$ and the convolutional sparse sub-dictionary $\{\boldsymbol{d}_k^t\}$ gives the dictionary learning model
$ \begin{array}{*{20}{c}} {\mathop {\min }\limits_{\begin{array}{*{20}{c}} {{\mathit{\boldsymbol{d}}_c},{\mathit{\boldsymbol{d}}_t}}\\ {{\mathit{\boldsymbol{x}}_c},{\mathit{\boldsymbol{x}}_t}} \end{array}} \left\{ {\frac{1}{2}\sum\limits_{m = 1}^M {\left( {\left\| {{\mathit{\boldsymbol{y}}_m} - \sum\limits_{k = 1}^{{K_c}} {\mathit{\boldsymbol{d}}_k^c} * \mathit{\boldsymbol{x}}_{m,k}^c - \sum\limits_{k = 1}^{{K_t}} {\mathit{\boldsymbol{d}}_k^t} * \mathit{\boldsymbol{x}}_{m,k}^t} \right\|_2^2 + } \right.} } \right.}\\ {\left. {\left. {{\lambda _c}\sum\limits_{k = 1}^{{K_c}} {{{\left\| {\mathit{\boldsymbol{x}}_{m,k}^c} \right\|}_1}} + {\lambda _t}\sum\limits_{k = 1}^{{K_t}} {{{\left\| {\mathit{\boldsymbol{x}}_{m,k}^t} \right\|}_1}} } \right)} \right\}} \end{array} $ | (4) |
where $M$ is the number of training images, $K_c$ and $K_t$ are the numbers of filters in the two sub-dictionaries, and $\lambda_c$ and $\lambda_t$ are regularization parameters. Writing the convolutions in matrix-vector form, Eq. (4) can be expressed compactly as
$ \begin{array}{*{20}{c}} {\mathop {\min }\limits_{\begin{array}{*{20}{c}} {{\mathit{\boldsymbol{D}}_c},{\mathit{\boldsymbol{D}}_t}}\\ {{\mathit{\boldsymbol{x}}_c},{\mathit{\boldsymbol{x}}_t}} \end{array}} \left\{ {\frac{1}{2}\left\| {\mathit{\boldsymbol{Y}} - {\mathit{\boldsymbol{D}}_c}{\mathit{\boldsymbol{x}}_c} - {\mathit{\boldsymbol{D}}_t}{\mathit{\boldsymbol{x}}_t}} \right\|_2^2 + } \right.}\\ {\left. {{\mathit{\boldsymbol{\lambda }}_c}{{\left\| {{\mathit{\boldsymbol{x}}_c}} \right\|}_1} + {\lambda _t}{{\left\| {{\mathit{\boldsymbol{x}}_t}} \right\|}_1}} \right\}} \end{array} $ | (5) |
To enhance the discriminability of the dictionary, sparse and low-rank constraints are imposed on the convolutional sparse and convolutional low-rank components, a low-rank constraint is added to the convolutional low-rank sub-dictionary, and the auxiliary variables $\boldsymbol{Y}_c$ and $\boldsymbol{Y}_t$ are introduced, giving
$ \begin{array}{*{20}{c}} {\mathop {\min }\limits_{\begin{array}{*{20}{c}} {{\mathit{\boldsymbol{D}}_c}}&{{\mathit{\boldsymbol{D}}_t}}\\ {{\mathit{\boldsymbol{x}}_c}}&{{\mathit{\boldsymbol{x}}_t}} \end{array}} \left\{ {\frac{1}{2}\left\| {\mathit{\boldsymbol{Y}} - {\mathit{\boldsymbol{Y}}_c} - {\mathit{\boldsymbol{Y}}_t}} \right\|_2^2 + \left\| {{\mathit{\boldsymbol{Y}}_c} - {\mathit{\boldsymbol{D}}_c}{\mathit{\boldsymbol{x}}_c}} \right\|_2^2 + } \right.}\\ {\left\| {{\mathit{\boldsymbol{Y}}_t} - {\mathit{\boldsymbol{D}}_t}{\mathit{\boldsymbol{x}}_t}} \right\|_2^2 + {\lambda _c}{{\left\| {{\mathit{\boldsymbol{x}}_c}} \right\|}_1} + {\lambda _t}{{\left\| {{\mathit{\boldsymbol{x}}_t}} \right\|}_1} + }\\ {\left. {{\lambda _1}{{\left\| {{\mathit{\boldsymbol{Y}}_c}} \right\|}_ * } + {\lambda _2}{{\left\| {{\mathit{\boldsymbol{Y}}_t}} \right\|}_1} + {\lambda _3}{{\left\| {{\mathit{\boldsymbol{D}}_c}} \right\|}_ * }} \right\}} \end{array} $ | (6) |
where $\|\cdot\|_*$ is the nuclear norm, i.e., the sum of the singular values of a matrix.
2.1.2 Solving the double dictionary learning model
Problem (6) is non-convex in its variables jointly, so it is split into three sub-problems and solved by alternating iterations [24]: auxiliary variable updates, sparse coding, and dictionary updates.
1) Randomly initialize the dictionaries and the other variables. When updating the auxiliary variable $\boldsymbol{Y}_c$, the remaining variables are fixed and the sub-problem is
$ \mathop {\min }\limits_{{\mathit{\boldsymbol{Y}}_c}} \left\{ {\left\| {\mathit{\boldsymbol{Y}} - {\mathit{\boldsymbol{Y}}_c} - {\mathit{\boldsymbol{Y}}_t}} \right\|_2^2 + \left\| {{\mathit{\boldsymbol{Y}}_c} - {\mathit{\boldsymbol{D}}_c}{\mathit{\boldsymbol{x}}_c}} \right\|_2^2 + {\lambda _1}{{\left\| {{\mathit{\boldsymbol{Y}}_c}} \right\|}_*}} \right\} $ | (7) |
By introducing the intermediate variables $\hat{\boldsymbol{Y}}$ and $\tilde{\boldsymbol{Y}}_c$ to merge the two quadratic terms, Eq. (7) reduces to
$ \mathop {\min }\limits_{{{\mathit{\boldsymbol{\tilde Y}}}_c}} \left\{ {\left\| {\mathit{\boldsymbol{\hat Y}} - {{\mathit{\boldsymbol{\tilde Y}}}_c}} \right\|_2^2 + {\lambda _1}{{\left\| {{{\mathit{\boldsymbol{\tilde Y}}}_c}} \right\|}_*}} \right\} $ | (8) |
Eq. (8) is a nuclear norm minimization problem and is solved with the singular value thresholding algorithm [25]. Updating the variable $\boldsymbol{Y}_t$ proceeds analogously:
$ \mathop {\min }\limits_{{{\mathit{\boldsymbol{\tilde Y}}}_t}} \left\{ {\left\| {\mathit{\boldsymbol{\hat Y}} - {{\mathit{\boldsymbol{\tilde Y}}}_t}} \right\|_2^2 + {\lambda _2}{{\left\| {{{\mathit{\boldsymbol{\tilde Y}}}_t}} \right\|}_1}} \right\} $ | (9) |
which is an $\ell_1$-regularized least-squares problem with a closed-form solution given by elementwise soft thresholding.
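Both auxiliary-variable updates are proximal problems with closed-form solutions; a minimal NumPy sketch of the two operators:

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm,
    as used to solve Eq. (8)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(Y, tau):
    """Elementwise soft thresholding: proximal operator of tau * l1 norm,
    as used to solve Eq. (9)."""
    return np.sign(Y) * np.maximum(np.abs(Y) - tau, 0.0)
```

Shrinking the singular values reduces the nuclear norm (promoting low rank), while shrinking entries reduces the $\ell_1$ norm (promoting sparsity).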
2) Sparse coding stage. When updating the convolutional low-rank coefficients $\boldsymbol{x}_c$, the other variables are fixed:
$ {\mathit{\boldsymbol{x}}_c} = \arg \mathop {\min }\limits_{{\mathit{\boldsymbol{x}}_c}} \left\{ {\left\| {{\mathit{\boldsymbol{Y}}_c} - {\mathit{\boldsymbol{D}}_c}{\mathit{\boldsymbol{x}}_c}} \right\|_2^2 + {\lambda _c}{{\left\| {{\mathit{\boldsymbol{x}}_c}} \right\|}_1}} \right\} $ | (10) |
Similarly, when updating the convolutional sparse coefficients $\boldsymbol{x}_t$:
$ {\mathit{\boldsymbol{x}}_t} = \arg \mathop {\min }\limits_{{\mathit{\boldsymbol{x}}_t}} \left\{ {\left\| {{\mathit{\boldsymbol{Y}}_t} - {\mathit{\boldsymbol{D}}_t}{\mathit{\boldsymbol{x}}_t}} \right\|_2^2 + {\lambda _t}{{\left\| {{\mathit{\boldsymbol{x}}_t}} \right\|}_1}} \right\} $ | (11) |
Eqs. (10) and (11) are convex. Applying the discrete Fourier transform to the dictionaries and coefficients converts spatial convolution into frequency-domain multiplication, and the problems are then solved with the alternating direction method of multipliers (ADMM) [19], yielding the updated low-rank and sparse coefficient matrices $\boldsymbol{x}_c$ and $\boldsymbol{x}_t$.
3) Dictionary update stage. When updating the convolutional low-rank dictionary $\boldsymbol{D}_c$, the auxiliary variable $\boldsymbol{Z}$ is introduced for the low-rank constraint:
$ \mathop {\min }\limits_{{\mathit{\boldsymbol{D}}_c}} \left\| {{\mathit{\boldsymbol{Y}}_c} - {\mathit{\boldsymbol{D}}_c}{\mathit{\boldsymbol{x}}_c}} \right\|_2^2 + {\lambda _3}{\left\| \mathit{\boldsymbol{Z}} \right\|_*}\quad {\rm{ s}}{\rm{. t}}{\rm{. }}{\mathit{\boldsymbol{D}}_c} = \mathit{\boldsymbol{Z}} $ | (12) |
When updating the convolutional sparse dictionary $\boldsymbol{D}_t$:
$ \mathop {\min }\limits_{{D_t}} \left\| {{\mathit{\boldsymbol{Y}}_t} - {\mathit{\boldsymbol{D}}_t}{\mathit{\boldsymbol{x}}_t}} \right\|_2^2 $ | (13) |
Eq. (12) is a constrained optimization problem solved with the augmented Lagrange multiplier method [27], giving the updated sub-dictionary $\boldsymbol{D}_c$; Eq. (13) is an unconstrained least-squares problem solved in the same frequency-domain framework, giving $\boldsymbol{D}_t$. The three stages are cycled until the preset number of iterations is reached, which yields the learned double dictionary.
2.2 Convolutional sparse and low-rank image decomposition
Given the convolutional low-rank and convolutional sparse sub-dictionaries $\{\boldsymbol{d}_{c,k}\}$ and $\{\boldsymbol{d}_{t,k}\}$ learned in Section 2.1, and incorporating total variation regularization, a source image $\boldsymbol{A}$ is decomposed by solving
$ \begin{array}{*{20}{c}} {\mathop {\min }\limits_{{\mathit{\boldsymbol{x}}_{c,k}},{\mathit{\boldsymbol{x}}_{t,k}}} \frac{1}{2}\left\| {A - \sum\limits_{k = 1}^{{K_c}} {{\mathit{\boldsymbol{d}}_{c,k}}} * {\mathit{\boldsymbol{x}}_{c,k}} - \sum\limits_{k = 1}^{{K_t}} {{\mathit{\boldsymbol{d}}_{t,k}}} * {\mathit{\boldsymbol{x}}_{t,k}}} \right\|_2^2 + }\\ {{\lambda _1}\sum\limits_{k = 1}^{{K_c}} {{{\left\| {{\mathit{\boldsymbol{x}}_{c,k}}} \right\|}_1}} + {\lambda _2}\sum\limits_{k = 1}^{{K_t}} {{{\left\| {{\mathit{\boldsymbol{x}}_{t,k}}} \right\|}_1}} + }\\ {{\lambda _3}{f_{{\rm{TV}}}}\left( {\sum\limits_{k = 1}^{{K_c}} {{\mathit{\boldsymbol{d}}_{c,k}}} * {\mathit{\boldsymbol{x}}_{c,k}}} \right)} \end{array} $ | (14) |
where $f_{\rm TV}(\cdot)$ is the total variation regularizer applied to the low-rank reconstruction, and $\lambda_1$, $\lambda_2$, $\lambda_3$ are regularization parameters.
Eq. (14) is solved by alternately updating the convolutional low-rank and convolutional sparse coefficients in the following two steps:
1) When updating the low-rank coefficients $\boldsymbol{x}_{c,k}$, the sparse coefficients are fixed:
$ \begin{array}{*{20}{c}} {\mathop {\min }\limits_{{\mathit{\boldsymbol{x}}_{c,k}}} \frac{1}{2}\left\| {\mathit{\boldsymbol{A}} - \sum\limits_{k = 1}^{{K_t}} {{\mathit{\boldsymbol{d}}_{t,k}} * {\mathit{\boldsymbol{x}}_{t,k}}} - \sum\limits_{k = 1}^{{K_c}} {{\mathit{\boldsymbol{d}}_{c,k}} * {\mathit{\boldsymbol{x}}_{c,k}}} } \right\|_2^2 + }\\ {{\lambda _1}\sum\limits_{k = 1}^{{K_c}} {{{\left\| {{\mathit{\boldsymbol{x}}_{c,k}}} \right\|}_1}} + {\lambda _3}{f_{{\rm{TV}}}}\left( {\sum\limits_{k = 1}^{{K_c}} {{\mathit{\boldsymbol{d}}_{c,k}} * {\mathit{\boldsymbol{x}}_{c,k}}} } \right)} \end{array} $ | (15) |
This is a sparse coding problem with a TV regularizer, solved with the discrete-Fourier-transform-based ADMM algorithm [19], which yields the updated low-rank coefficient matrix.
2) When updating the sparse coefficients $\boldsymbol{x}_{t,k}$, the low-rank coefficients are fixed:
$ \begin{array}{*{20}{c}} {{{\mathit{\boldsymbol{\hat x}}}_{t,k}} = \mathop {\arg \min }\limits_{{\mathit{\boldsymbol{x}}_{t,k}}} \frac{1}{2}\left\| {\mathit{\boldsymbol{A}} - \sum\limits_{k = 1}^{{K_c}} {{\mathit{\boldsymbol{d}}_{c,k}} * {\mathit{\boldsymbol{x}}_{c,k}}} - } \right.}\\ {\left. {\sum\limits_{k = 1}^{{K_t}} {{\mathit{\boldsymbol{d}}_{t,k}} * {\mathit{\boldsymbol{x}}_{t,k}}} } \right\|_2^2 + {\lambda _2}\sum\limits_{k = 1}^{{K_t}} {{{\left\| {{\mathit{\boldsymbol{x}}_{t,k}}} \right\|}_1}} } \end{array} $ | (16) |
For ease of optimization, the auxiliary variable $\boldsymbol{y}_r = \boldsymbol{A} - \sum_{k = 1}^{K_c} \boldsymbol{d}_{c,k} * \boldsymbol{x}_{c,k}$ is introduced, and Eq. (16) becomes
$ \begin{array}{*{20}{c}} {{{\mathit{\boldsymbol{\hat x}}}_{t,k}} = \arg \mathop {\min }\limits_{{\mathit{\boldsymbol{x}}_{t,k}}} \frac{1}{2}\left\| {{\mathit{\boldsymbol{y}}_r} - \sum\limits_{k = 1}^{{K_t}} {{\mathit{\boldsymbol{d}}_{t,k}} * {\mathit{\boldsymbol{x}}_{t,k}}} } \right\|_2^2 + }\\ {{\lambda _2}\sum\limits_{k = 1}^{{K_t}} {{{\left\| {{\mathit{\boldsymbol{x}}_{t,k}}} \right\|}_1}} } \end{array} $ | (17) |
Eq. (17) is solved by alternating iterations within the ADMM framework [23], which yields the updated sparse coefficient matrix. The two steps above are repeated until the preset number of iterations is reached, and the convolutional sparse and convolutional low-rank components of the source image are output.
2.3 Fusion based on the adaptive PCNN
2.3.1 Fusion rule for the convolutional sparse components
Section 2.2 yields the convolutional sparse components $\boldsymbol{S}_A$ and $\boldsymbol{S}_B$ of the source images $\boldsymbol{A}$ and $\boldsymbol{B}$, which are fused as follows:
1) Compute the novel sum-modified spatial frequency (NMSF) of each sparse component $\boldsymbol{S}$:
$ \begin{array}{*{20}{c}} {{D_f} = \left[ {\sqrt {\frac{1}{{\left( {M - 1} \right)\left( {N - 1} \right)}}\left[ {\sum\limits_{i = 2}^M {\sum\limits_{j = 2}^N {{{\left( {\mathit{\boldsymbol{S}}\left( {i,j} \right) - \mathit{\boldsymbol{S}}\left( {i - 1,j - 1} \right)} \right)}^2}} } } \right]} + } \right.}\\ {{{\left. {\sqrt {\frac{1}{{\left( {M - 1} \right)\left( {N - 1} \right)}}\left[ {\sum\limits_{i = 2}^M {\sum\limits_{j = 2}^N {{{\left( {\mathit{\boldsymbol{S}}\left( {i - 1,j} \right) - \mathit{\boldsymbol{S}}\left( {i,j - 1} \right)} \right)}^2}} } } \right]} } \right]}^2}} \end{array} $ | (18) |
$ \mathit{\boldsymbol{NMSF}} = \sqrt {\frac{1}{{M(N - 1)}}\left[ {\sum\limits_{i = 1}^M {\sum\limits_{j = 2}^N {(\mathit{\boldsymbol{S}}(} } i,j - 1) - \mathit{\boldsymbol{S}}(i,j){)^2}} \right] + \frac{1}{{(M - 1)N}}\left[ {\sum\limits_{i = 2}^M {\sum\limits_{j = 1}^N {(\mathit{\boldsymbol{S}}(} } i,j) - \mathit{\boldsymbol{S}}(i - 1,j){)^2}} \right] + {D_f}} $ | (19) |
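As a sketch (not the paper's exact implementation), the NMSF of Eqs. (18)-(19) can be computed with NumPy as below, reading Eq. (18) as combining the mean-square differences along the two diagonal directions:

```python
import numpy as np

def nmsf(S):
    """Novel sum-modified spatial frequency of Eqs. (18)-(19) (sketch)."""
    S = S.astype(float)
    M, N = S.shape
    rf = np.sum((S[:, 1:] - S[:, :-1]) ** 2) / (M * (N - 1))            # row diffs
    cf = np.sum((S[1:, :] - S[:-1, :]) ** 2) / ((M - 1) * N)            # column diffs
    d1 = np.sum((S[1:, 1:] - S[:-1, :-1]) ** 2) / ((M - 1) * (N - 1))   # main diagonal
    d2 = np.sum((S[:-1, 1:] - S[1:, :-1]) ** 2) / ((M - 1) * (N - 1))   # anti-diagonal
    df = (np.sqrt(d1) + np.sqrt(d2)) ** 2                                # Eq. (18)
    return np.sqrt(rf + cf + df)
```

A flat image has NMSF 0, and richer texture yields a larger value, which is why it serves as the external excitation for the sparse (detail) component.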
2) Compute the regional average gradient $\bar{\boldsymbol{g}}(i,j)$ over a 3×3 neighborhood:
$ \begin{array}{l} \begin{array}{*{20}{l}} {\mathit{\boldsymbol{\bar g}}(i,j) = \frac{1}{9}\sum\limits_{a = - 1}^1 {\mathop \sum \limits_{b = - 1}^1 } }\\ {{{\left\{ {\frac{{\left[ {{\mathit{\boldsymbol{g}}_1}(i + a,j + b) + {\mathit{\boldsymbol{g}}_2}(i + a,j + b)} \right]}}{2}} \right\}}^{1/2}}} \end{array}\\ \begin{array}{*{20}{l}} {{\mathit{\boldsymbol{g}}_1}(i,j) = {{[\mathit{\boldsymbol{S}}(i,j) - \mathit{\boldsymbol{S}}(i + 1,j)]}^2}}\\ {{\mathit{\boldsymbol{g}}_2}(i,j) = {{[\mathit{\boldsymbol{S}}(i,j) - \mathit{\boldsymbol{S}}(i,j + 1)]}^2}} \end{array} \end{array} $ | (20) |
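Eq. (20) averages the pixel-wise gradient magnitude over each 3×3 neighborhood; a minimal NumPy sketch (boundary handling by edge replication is an assumption):

```python
import numpy as np

def regional_avg_gradient(S):
    """Regional average gradient of Eq. (20): 3x3 mean of sqrt((g1 + g2)/2)."""
    S = S.astype(float)
    g1 = np.zeros_like(S)
    g2 = np.zeros_like(S)
    g1[:-1, :] = (S[:-1, :] - S[1:, :]) ** 2    # vertical forward difference
    g2[:, :-1] = (S[:, :-1] - S[:, 1:]) ** 2    # horizontal forward difference
    g = np.sqrt((g1 + g2) / 2.0)
    gp = np.pad(g, 1, mode='edge')              # replicate borders for the 3x3 mean
    H, W = S.shape
    out = np.zeros_like(S)
    for a in range(3):
        for b in range(3):
            out += gp[a:a + H, b:b + W]
    return out / 9.0
```

The resulting map is what sets the adaptive linking strength $\beta(i,j)$ in the PCNN.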
3) Use the PCNN model to obtain the firing map $T^S$ of each convolutional sparse component, with the external excitation and linking strength set as
$ F_{i,j}^S[n] = \mathit{\boldsymbol{NMSF}}_{i,j}^S $ | (21) |
$ \beta (i,j) = {\mathit{\boldsymbol{\bar g}}^S}(i,j) $ | (22) |
4) After the PCNN completes $n_{\max}$ iterations, select the fused sparse component pixel by pixel by comparing the firing maps of the two sparse components:
$ {\mathit{\boldsymbol{F}}_S}(i,j) = \left\{ {\begin{array}{*{20}{l}} {{\mathit{\boldsymbol{S}}_A}(i,j)}&{T_{i,j,A}^S\left[ {{n_{\max }}} \right] \ge T_{i,j,B}^S\left[ {{n_{\max }}} \right]}\\ {{\mathit{\boldsymbol{S}}_B}(i,j)}&{T_{i,j,A}^S\left[ {{n_{\max }}} \right] < T_{i,j,B}^S\left[ {{n_{\max }}} \right]} \end{array}} \right. $ | (23) |
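The selection rule of Eq. (23) is a pixelwise maximum over firing counts; as a sketch:

```python
import numpy as np

def fuse_by_firing(comp_A, comp_B, T_A, T_B):
    """Pixelwise selection of Eq. (23): keep the component whose PCNN neuron
    fired at least as often after n_max iterations."""
    return np.where(T_A >= T_B, comp_A, comp_B)
```

The same rule, applied to the low-rank components and their firing maps, implements the fusion of Section 2.3.2.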
2.3.2 Fusion rule for the convolutional low-rank components
Section 2.2 also yields the convolutional low-rank components $\boldsymbol{C}_A$ and $\boldsymbol{C}_B$ of the source images; the fusion steps are:
1) Compute the novel sum-modified Laplacian (NSML) of each low-rank component $\boldsymbol{C}$:
$ \mathit{\boldsymbol{NSML}}(i,j) = \sum\limits_a {\sum\limits_b {\mathit{\boldsymbol{w}}(a,b)\mathit{\boldsymbol{F}}(i + a,j + b)} } $ | (24) |
$ \begin{array}{*{20}{c}} {\mathit{\boldsymbol{F}}(i,j) = \left| {2\mathit{\boldsymbol{C}}(i,j) - \mathit{\boldsymbol{C}}(i - 1,j) - \mathit{\boldsymbol{C}}(i + 1,j)} \right| + }\\ {\left| {2\mathit{\boldsymbol{C}}(i,j) - \mathit{\boldsymbol{C}}(i,j - 1) - \mathit{\boldsymbol{C}}(i,j + 1)} \right|} \end{array} $ | (25) |
where $\boldsymbol{w}(a,b)$ is the weighting matrix
$ \mathit{\boldsymbol{w}}(a,b) = \left[ {\begin{array}{*{20}{l}} {1/15}&{2/15}&{1/15}\\ {2/15}&{3/15}&{2/15}\\ {1/15}&{2/15}&{1/15} \end{array}} \right] $ |
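A NumPy sketch of Eqs. (24)-(25): the modified Laplacian is the sum of absolute second differences along rows and columns, then weighted over a 3×3 window (border pixels, where the second differences are undefined, are left at zero as an assumption):

```python
import numpy as np

def nsml(C):
    """Novel sum-modified Laplacian of Eqs. (24)-(25) (sketch)."""
    C = C.astype(float)
    F = np.zeros_like(C)
    F[1:-1, :] += np.abs(2 * C[1:-1, :] - C[:-2, :] - C[2:, :])   # row direction
    F[:, 1:-1] += np.abs(2 * C[:, 1:-1] - C[:, :-2] - C[:, 2:])   # column direction
    w = np.array([[1, 2, 1],
                  [2, 3, 2],
                  [1, 2, 1]]) / 15.0                               # w(a, b)
    Fp = np.pad(F, 1)
    H, W = C.shape
    out = np.zeros_like(C)
    for a in range(3):
        for b in range(3):
            out += w[a, b] * Fp[a:a + H, b:b + W]
    return out
```

Linear ramps have zero second differences, so the NSML responds to curvature and fine structure rather than smooth brightness gradients, matching its role for the low-rank component.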
2) Compute the regional average gradient $\bar{\boldsymbol{g}}^C(i,j)$ of each low-rank component with Eq. (20).
3) Use the PCNN model to compute the firing map $T^C$ of each low-rank component, with
$ \mathit{\boldsymbol{F}}_{i,j}^C[n] = \mathit{\boldsymbol{NSML}}_{i,j}^C $ | (26) |
$ \beta (i,j) = {\mathit{\boldsymbol{\bar g}}^C}(i,j) $ | (27) |
4) Likewise, select the fused low-rank component pixel by pixel from the firing maps output by the PCNN:
$ {\mathit{\boldsymbol{F}}_C}(i,j) = \left\{ {\begin{array}{*{20}{l}} {{\mathit{\boldsymbol{C}}_A}(i,j)}&{\mathit{\boldsymbol{T}}_{i,j,A}^C\left[ {{n_{\max }}} \right] \ge \mathit{\boldsymbol{T}}_{i,j,B}^C\left[ {{n_{\max }}} \right]}\\ {{\mathit{\boldsymbol{C}}_B}(i,j)}&{\mathit{\boldsymbol{T}}_{i,j,A}^C\left[ {{n_{\max }}} \right] < \mathit{\boldsymbol{T}}_{i,j,B}^C\left[ {{n_{\max }}} \right]} \end{array}} \right. $ |
2.4 Fusion procedure
Let $\boldsymbol{A}$ and $\boldsymbol{B}$ be registered medical CT/MR source images. The overall fusion procedure is:
1) Dictionary learning stage. With the dictionary learning model of Section 2.1 and the training data set, learn the convolutional low-rank and convolutional sparse sub-dictionaries.
2) Image decomposition stage. Decompose the source images $\boldsymbol{A}$ and $\boldsymbol{B}$ with the model of Section 2.2 into convolutional sparse components $\boldsymbol{S}_A$, $\boldsymbol{S}_B$ and convolutional low-rank components $\boldsymbol{C}_A$, $\boldsymbol{C}_B$.
3) Convolutional sparse component fusion stage. Fuse $\boldsymbol{S}_A$ and $\boldsymbol{S}_B$ with the rule of Section 2.3.1 to obtain $\boldsymbol{F}_s$.
4) Convolutional low-rank component fusion stage. Fuse $\boldsymbol{C}_A$ and $\boldsymbol{C}_B$ with the rule of Section 2.3.2 to obtain $\boldsymbol{F}_c$.
5) Combine the fused components to obtain the final fused image
$ \mathit{\boldsymbol{F}} = {\mathit{\boldsymbol{F}}_c} + {\mathit{\boldsymbol{F}}_s} $ |
3 Experimental results and analysis
3.1 Experimental settings
To verify the performance of the proposed method, three groups of registered brain multimodal medical images (all 256×256 pixels) were selected as source images. The data come from Harvard Medical School [28] and include CT/MR images of cerebrovascular disease, MR/PET images of brain tumor, and MR/SPECT images of brain tumor. Six methods were selected for comparison: 1) image fusion based on multiscale transform and sparse representation (MST-SR) [29]; 2) image fusion based on adaptive sparse representation (ASR) [30]; 3) adaptive medical image fusion based on NSCT and unit-linking PCNN (NSCT-PCNN) [31]; 4) medical image fusion based on joint patch clustering (JPC) [32]; 5) medical image fusion based on sliding windows and sparse representation (SWT-SR) [33]; 6) fusion based on the nonsubsampled contourlet transform (NSCT) [34]. The proposed method is denoted CSRDD-APCNN.
The training images used to learn the convolutional sparse and low-rank dictionaries are shown in Fig. 5. For the MST-SR method, the dictionary training samples are 100 000 image patches randomly sampled from 20 natural images, with a sliding step of 1, a sliding window of 8×8 pixels, and a dictionary of size 64×512; the error tolerance is set as in the original work.
The parameters of the proposed method were obtained by tuning over repeated experiments; for the dictionary learning model of Eq. (6), the regularization weights were likewise fixed empirically.
3.2 Evaluation criteria
Four metrics are used to quantitatively evaluate the quality of the fused images: standard deviation (SD) [35], normalized mutual information (MI) [36], spatial frequency (SF), and the edge information transfer factor ($Q^{AB/F}$) [37]. The mutual information between the fused image $\boldsymbol{F}$ and the source images $\boldsymbol{A}$ and $\boldsymbol{B}$ is computed from
$ {I_{FA}}(f,a) = \sum\limits_{f,a} {{p_{FA}}} (f,a){\log _2}\frac{{{p_{FA}}(f,a)}}{{{p_F}(f){p_A}(a)}} $ |
$ {I_{FB}}(f,b) = \sum\limits_{f,b} {{p_{FB}}} (f,b){\log _2}\frac{{{p_{FB}}(f,b)}}{{{p_F}(f){p_B}(b)}} $ |
where $p_{FA}(f,a)$ and $p_{FB}(f,b)$ are the joint gray-level distributions of $\boldsymbol{F}$ with $\boldsymbol{A}$ and with $\boldsymbol{B}$, and $p_F$, $p_A$, $p_B$ are the corresponding marginal distributions. The MI metric is then
$ M_F^{AB} = {I_{FA}}(f,a) + {I_{FB}}(f,b) $ |
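The mutual information terms above can be estimated from joint gray-level histograms; a minimal NumPy sketch (the bin count is an assumption):

```python
import numpy as np

def mutual_information(F, A, bins=64):
    """Mutual information I_FA between a fused image F and a source image A,
    estimated from the joint gray-level histogram (sketch)."""
    hist, _, _ = np.histogram2d(F.ravel(), A.ravel(), bins=bins)
    p = hist / hist.sum()                      # joint distribution p_FA(f, a)
    pf = p.sum(axis=1, keepdims=True)          # marginal p_F(f)
    pa = p.sum(axis=0, keepdims=True)          # marginal p_A(a)
    nz = p > 0                                 # avoid log of zero cells
    return float(np.sum(p[nz] * np.log2(p[nz] / (pf @ pa)[nz])))
```

The metric $M_F^{AB}$ is then the sum of the two terms, `mutual_information(F, A) + mutual_information(F, B)`.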
The spatial frequency SF measures the overall activity and clarity level of an image and is defined as
$ {f_{SF}} = \sqrt {f_{RF}^2 + f_{CF}^2} $ |
where $f_{RF}$ and $f_{CF}$ are the row frequency and column frequency
$ {f_{RF}} = \sqrt {\frac{1}{{M(N - 1)}}\sum\limits_{i = 0}^{M - 1} {\sum\limits_{j = 0}^{N - 2} {(\mathit{\boldsymbol{F}}(} } i,j + 1) - \mathit{\boldsymbol{F}}(i,j){)^2}} $ |
$ {f_{CF}} = \sqrt {\frac{1}{{(M - 1)N}}\sum\limits_{i = 0}^{M - 2} {\sum\limits_{j = 0}^{N - 1} {(\mathit{\boldsymbol{F}}(} } i + 1,j) - \mathit{\boldsymbol{F}}(i,j){)^2}} $ |
where $M$ and $N$ are the numbers of image rows and columns; a larger SF indicates a clearer fused image. The edge information transfer factor is defined as
$ {Q^{AB/V}} = \frac{{\sum\limits_{i = 1}^M {\sum\limits_{j = 1}^N {\left[ {Q_{i,j}^{AF}\mathit{\boldsymbol{w}}_{i,j}^x + Q_{i,j}^{BF}\mathit{\boldsymbol{w}}_{i,j}^y} \right]} } }}{{\sum\limits_{i = 1}^M {\sum\limits_{j = 1}^N {\left[ {w_{i,j}^x + w_{i,j}^y} \right]} } }} $ |
where $w^x_{i,j}$ and $w^y_{i,j}$ are the weights of the edge preservation values, and $Q^{AF}_{i,j}$ and $Q^{BF}_{i,j}$ measure how well the edge information of $\boldsymbol{A}$ and $\boldsymbol{B}$ is transferred to $\boldsymbol{F}$:
$ Q_{i,j}^{AF} = Q_{g,i,j}^{AF}Q_{a,i,j}^{AF} $ |
$ Q_{i,j}^{BF} = Q_{g,i,j}^{BF}Q_{a,i,j}^{BF} $ |
where $Q^{AF}_{g,i,j}$ and $Q^{AF}_{a,i,j}$ are the edge strength and orientation preservation values, respectively, and similarly for $Q^{BF}$.
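Of the four metrics, SF follows directly from the row/column frequency formulas above; a minimal NumPy sketch:

```python
import numpy as np

def spatial_frequency(F):
    """Spatial frequency metric sqrt(f_RF^2 + f_CF^2) from the row and column
    first differences (sketch)."""
    F = F.astype(float)
    M, N = F.shape
    rf2 = np.sum((F[:, 1:] - F[:, :-1]) ** 2) / (M * (N - 1))   # row frequency^2
    cf2 = np.sum((F[1:, :] - F[:-1, :]) ** 2) / ((M - 1) * N)   # column frequency^2
    return np.sqrt(rf2 + cf2)
```

A constant image scores 0, while a unit-amplitude checkerboard, whose neighboring pixels all differ by 1, scores exactly $\sqrt{2}$.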
3.3 实验结果分析
在图 7中给出了脑血管疾病CT/MR图像的融合结果,其中图像
NSCT方法都存在对比度降低的问题。从表 1中可以看出,以上方法在MI指标以及
Table 1
Comparison of cerebrovascular disease CT/MR fusion results

| Method | SD | MI | SF | QAB/F |
| MST-SR | 68.8573 | 2.3209 | 24.5461 | 0.5103 |
| ASR | 70.3849 | 2.5105 | 25.1562 | 0.6658 |
| NSCT-PCNN | 62.2033 | 2.7618 | 23.1044 | 0.5502 |
| JPC | 67.8543 | 2.7380 | 21.2350 | 0.5911 |
| SWT-SR | 64.2070 | 3.3065 | 22.9582 | 0.5360 |
| NSCT | 67.2510 | 2.1984 | 25.5173 | 0.6248 |
| CSRDD-APCNN | **71.3652** | **3.4575** | **25.6012** | **0.6722** |

Note: bold indicates the best result.
Fig. 8 shows the fusion results on the brain tumor MR/PET images; Table 2 compares the objective metrics, where the proposed method again achieves the best values on all four indicators.
Table 2
Comparison of brain tumor disease MR/PET fusion results

| Method | SD | MI | SF | QAB/F |
| MST-SR | 56.8427 | 2.6781 | 18.4792 | 0.6452 |
| ASR | 54.2416 | 2.6960 | 19.2107 | 0.6240 |
| NSCT-PCNN | 57.5939 | 2.5681 | 20.5165 | 0.5208 |
| JPC | 57.2043 | 2.6637 | 18.2355 | 0.5300 |
| SWT-SR | 53.3361 | 2.5805 | 19.3152 | 0.5532 |
| NSCT | 44.2512 | 2.3031 | 18.6407 | 0.5120 |
| CSRDD-APCNN | **58.7982** | **2.8547** | **20.8710** | **0.6753** |

Note: bold indicates the best result.
Fig. 9 shows the fusion results on the brain tumor MR/SPECT images; Table 3 compares the objective metrics, where the proposed method achieves the best values on all four indicators.
Table 3
Comparison of brain tumor disease MR/SPECT fusion results

| Method | SD | MI | SF | QAB/F |
| MST-SR | 65.2064 | 2.6271 | 16.5642 | 0.6021 |
| ASR | 67.5133 | 2.6593 | 15.6076 | 0.5477 |
| NSCT-PCNN | 63.5978 | 2.4786 | 16.7406 | 0.3974 |
| JPC | 66.7520 | 2.5672 | 15.8321 | 0.5720 |
| SWT-SR | 65.1290 | 2.8546 | 16.8249 | 0.5490 |
| NSCT | 63.4520 | 2.5751 | 16.4732 | 0.5727 |
| CSRDD-APCNN | **68.6256** | **2.9177** | **17.2422** | **0.6446** |

Note: bold indicates the best result.
4 Conclusion
To address the limited detail-preservation ability of sparse-representation-based fusion methods, this paper proposes an image fusion method that combines a discriminative convolutional low-rank and convolutional sparse double dictionary model with an adaptive PCNN. Dictionary learning and convolutional coefficient coding decompose the image into low-rank and sparse components; the low-rank components are fused by exciting the pulse-coupled neural network with the novel sum-modified Laplacian (NSML), and the sparse components by exciting it with the novel sum-modified spatial frequency (NMSF). The simulations show that the proposed method clearly outperforms the compared methods in objective evaluation and visual quality and preserves richer features and detail information.
However, the dictionary learning and decomposition models still require preset parameters; inappropriate settings can greatly reduce the efficiency of the learned dictionary and degrade fusion quality. Exploring how to determine these parameters adaptively and effectively is the focus of our future work.
References
- [1] Li S T, Kang X D, Hu J W, et al. Image matting for fusion of multi-focus images in dynamic scenes[J]. Information Fusion, 2013, 14(2): 147-162. [DOI:10.1016/j.inffus.2011.07.001]
- [2] Lou J Q, Li J F, Dai W Z. Medical image fusion using non-subsampled shearlet transform[J]. Journal of Image and Graphics, 2017, 22(11): 1574-1583. [DOI:10.11834/jig.170014]
- [3] Li H, Manjunath B S, Mitra S K. Multi-sensor image fusion using the wavelet transform[C]//Proceedings of the 1st International Conference on Image Processing. Austin, TX, USA: IEEE, 1994: 51-55. [DOI:10.1109/ICIP.1994.413273]
- [4] Do M N, Vetterli M. The contourlet transform: an efficient directional multiresolution image representation[J]. IEEE Transactions on Image Processing, 2005, 14(12): 2091-2106. [DOI:10.1109/TIP.2005.859376]
- [5] Kong W, Liu J P. Technique for image fusion based on nonsubsampled shearlet transform and improved pulse-coupled neural network[J]. Optical Engineering, 2013, 52(1): 017001. [DOI:10.1117/1.OE.52.1.017001]
- [6] Zhang B H, Lu X Q, Jia W T. A multi-focus image fusion algorithm based on an improved dual-channel PCNN in NSCT domain[J]. Optik, 2013, 124(20): 4104-4109. [DOI:10.1016/j.ijleo.2012.12.032]
- [7] Bhatnagar G, Wu Q M J, Liu Z. Directive contrast based multimodal medical image fusion in NSCT domain[J]. IEEE Transactions on Multimedia, 2013, 15(5): 1014-1024. [DOI:10.1109/TMM.2013.2244870]
- [8] Luo X Q, Zhang Z C, Zhang B C, et al. Image fusion with contextual statistical similarity and nonsubsampled shearlet transform[J]. IEEE Sensors Journal, 2017, 17(6): 1760-1771. [DOI:10.1109/JSEN.2016.2646741]
- [9] Donoho D L. Compressed sensing[J]. IEEE Transactions on Information Theory, 2006, 52(4): 1289-1306. [DOI:10.1109/TIT.2006.871582]
- [10] Zhu Z Q, Chai Y, Yin H P, et al. A novel dictionary learning approach for multi-modality medical image fusion[J]. Neurocomputing, 2016, 214: 471-482. [DOI:10.1016/j.neucom.2016.06.036]
- [11] Dong X, Wang L F, Qin P L, et al. CT/MR brain image fusion method via improved coupled dictionary learning[J]. Journal of Computer Applications, 2017, 37(6): 1722-1727, 1746. [DOI:10.11772/j.issn.1001-9081.2017.06.1722]
- [12] Zhang X, Xue Y J, Tu S Q, et al. Remote sensing image fusion based on structural group sparse representation[J]. Journal of Image and Graphics, 2016, 21(8): 1106-1118. [DOI:10.11834/jig.20160815]
- [13] Zhang H, Patel V M. Convolutional sparse and low-rank coding-based image decomposition[J]. IEEE Transactions on Image Processing, 2018, 27(5): 2121-2133. [DOI:10.1109/TIP.2017.2786469]
- [14] Gu S H, Zuo W M, Xie Q, et al. Convolutional sparse coding for image super-resolution[C]//Proceedings of 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015: 1823-1831. [DOI:10.1109/ICCV.2015.212]
- [15] Rong Y, Xiong S W, Gao Y S. Low-rank double dictionary learning from corrupted data for robust image classification[J]. Pattern Recognition, 2017, 72: 419-432. [DOI:10.1016/j.patcog.2017.06.038]
- [16] Yin H P, Li Y X, Chai Y, et al. A novel sparse-representation-based multi-focus image fusion approach[J]. Neurocomputing, 2016, 216: 216-229. [DOI:10.1016/j.neucom.2016.07.039]
- [17] Zong J J, Qiu T S. Medical image fusion based on sparse representation of classified image patches[J]. Biomedical Signal Processing and Control, 2017, 34: 195-205. [DOI:10.1016/j.bspc.2017.02.005]
- [18] Zhang Q, Levine M D. Robust multi-focus image fusion using multi-task sparse representation and spatial context[J]. IEEE Transactions on Image Processing, 2016, 25(5): 2045-2058. [DOI:10.1109/TIP.2016.2524212]
- [19] Garcia-Cardona C, Wohlberg B. Convolutional dictionary learning: a comparative review and new algorithms[J]. IEEE Transactions on Computational Imaging, 2018, 4(3): 366-381. [DOI:10.1109/TCI.2018.2840334]
- [20] Bristow H, Eriksson A, Lucey S. Fast convolutional sparse coding[C]//Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition. Portland, OR, USA: IEEE, 2013: 391-398. [DOI:10.1109/CVPR.2013.57]
- [21] Wohlberg B. Efficient algorithms for convolutional sparse representations[J]. IEEE Transactions on Image Processing, 2016, 25(1): 301-315. [DOI:10.1109/TIP.2015.2495260]
- [22] Chai Y, Li H F, Guo M Y. Multifocus image fusion scheme based on features of multiscale products and PCNN in lifting stationary wavelet domain[J]. Optics Communications, 2011, 284(5): 1146-1158. [DOI:10.1016/j.optcom.2010.10.056]
- [23] Wohlberg B. Efficient convolutional sparse coding[C]//Proceedings of 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing. Florence, Italy: IEEE, 2014: 7173-7177. [DOI:10.1109/ICASSP.2014.6854992]
- [24] Gabay D, Mercier B. A dual algorithm for the solution of nonlinear variational problems via finite element approximation[J]. Computers & Mathematics with Applications, 1976, 2(1): 17-40. [DOI:10.1016/0898-1221(76)90003-1]
- [25] Cai J F, Candès E J, Shen Z W. A singular value thresholding algorithm for matrix completion[J]. SIAM Journal on Optimization, 2010, 20(4): 1956-1982. [DOI:10.1137/080738970]
- [26] Zhang Y D, Wang S H, Ji G L, et al. Exponential wavelet iterative shrinkage thresholding algorithm with random shift for compressed sensing magnetic resonance imaging[J]. Information Sciences, 2015, 10(1): 116-117. [DOI:10.1002/tee.22059]
- [27] Lin Z C, Chen M M, Wu L Q, et al. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices[R]. Urbana: Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, 2009.
- [28] The Whole Brain Atlas of Harvard Medical School[EB/OL]. Accessed: Nov. 2, 2015; cited 2019-01-22. http://www.med.harvard.edu/AANLIB/
- [29] Liu Y, Liu S P, Wang Z F. A general framework for image fusion based on multi-scale transform and sparse representation[J]. Information Fusion, 2015, 24: 147-164. [DOI:10.1016/j.inffus.2014.09.004]
- [30] Liu Y, Wang Z F. Simultaneous image fusion and denoising with adaptive sparse representation[J]. IET Image Processing, 2015, 9(5): 347-357. [DOI:10.1049/iet-ipr.2014.0311]
- [31] Liu W. Adaptive medical image fusion method based on NSCT and unit-linking PCNN[C]//Proceedings of the International Conference on Computer Engineering, Information Science & Application Technology. Atlantis Press, 2016. [DOI:10.2991/iccia-16.2016.86]
- [32] Kim M, Han D K, Ko H. Joint patch clustering-based dictionary learning for multimodal image fusion[J]. Information Fusion, 2016, 27: 198-214. [DOI:10.1016/j.inffus.2015.03.003]
- [33] Yang B, Li S T. Multifocus image fusion and restoration with sparse representation[J]. IEEE Transactions on Instrumentation and Measurement, 2010, 59(4): 884-892. [DOI:10.1109/TIM.2009.2026612]
- [34] Zhang Q, Guo B L. Multifocus image fusion using the nonsubsampled contourlet transform[J]. Signal Processing, 2009, 89(7): 1334-1346. [DOI:10.1016/j.sigpro.2009.01.012]
- [35] Shi W Z, Zhu C Q, Tian Y, et al. Wavelet-based image fusion and quality assessment[J]. International Journal of Applied Earth Observation and Geoinformation, 2005, 6(3-4): 241-251. [DOI:10.1016/j.jag.2004.10.010]
- [36] Qu G H, Zhang D L, Yan P F. Information measure for performance of image fusion[J]. Electronics Letters, 2002, 38(7): 313-315. [DOI:10.1049/el:20020212]
- [37] Xydeas C S, Petrovic V. Objective image fusion performance measure[J]. Electronics Letters, 2000, 36(4): 308-309. [DOI:10.1049/el:20000267]