Objective To address the insufficient detail-preservation ability of sparse-coding-based medical image fusion methods, a multimodal medical image fusion method based on convolutional sparse representation dual-dictionary learning and adaptive PCNN is proposed. Method The method first learns convolutional sparse and convolutional low-rank sub-dictionaries from registered training images. Under the two dictionaries, the convolutional sparse and convolutional low-rank representation coefficients are obtained with the alternating direction method of multipliers (ADMM), and the convolutional sparse and convolutional low-rank components are reconstructed from the corresponding dictionaries. The novel sum-modified Laplacian (NSML) and the novel sum-modified spatial frequency (NMSF) are then used to excite a pulse coupled neural network (PCNN), which fuses the convolutional sparse and convolutional low-rank components, respectively. Finally, the fused convolutional sparse and convolutional low-rank components are combined to obtain the final fused image. Results Simulations on grayscale and color images, compared against other fusion methods, show that the proposed method is clearly superior to the six compared methods in both objective evaluation and visual quality, achieving the best scores on all four metrics. Compared with the six multimodal image fusion methods, the average standard deviation of the three groups of experiments improved by 7%, 10%, and 5.2%, respectively; the average mutual information by 33.4%, 10.9%, and 11.3%; the average spatial frequency by 8.2%, 9.6%, and 5.6%; and the average edge evaluation factor by 16.9%, 20.7%, and 21.6%. Conclusion Compared with other sparse representation methods, the proposed algorithm effectively improves the quality of multimodal medical image fusion, better preserves the detail information of the source images, enriches the information in the fused image, conforms to the visual characteristics of the human eye, and effectively assists doctors in disease diagnosis.
Objective Fusion of multimodal medical images is an important medical imaging technique that integrates complementary information from multimodal images to produce new composite images. Sparse representation has achieved great success in medical image fusion in recent years. However, because sparse representation methods are based on sliding-window technology, their ability to preserve details in the fused image is insufficient. Therefore, a multimodal medical image fusion method based on convolutional sparse representation dual-dictionary learning and adaptive PCNN is proposed. Method According to the low-rank and sparsity characteristics of images, the method decomposes the source image into two parts and constructs a dual dictionary based on convolutional sparse representation. The sparse component contains abundant detail texture, while the low-rank component contains basic information such as contours and brightness. First, low-rank and sparse features are extracted from the training images to form two basic dictionaries for representing the test images. To enhance the discriminability of the dual dictionary, the dictionary learning model is improved by adding low-rank and sparse constraints to the low-rank and sparse components, respectively. Dictionary learning proceeds by alternating iterative updates in three parts: auxiliary-variable updates, sparse coding, and dictionary updates. The convolutional sparse and convolutional low-rank sub-dictionaries for the training images are obtained by cycling through these three updates. Then, total variation regularization is incorporated into the image decomposition model, and the Fourier-domain alternating direction method of multipliers (ADMM) is used to obtain the representation coefficients of the sparse and low-rank components of the source image in their respective sub-dictionaries.
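The low-rank and sparse constraints added to the dictionary-learning model correspond, in the ADMM auxiliary-variable updates, to two standard proximal operators: element-wise soft-thresholding for the sparse (l1) term and singular-value thresholding for the low-rank (nuclear-norm) term. A minimal Python/NumPy sketch; the threshold values and demo matrices are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of the l1 norm: element-wise shrinkage.
    Solves the sparse-constraint subproblem in the ADMM updates."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def singular_value_threshold(X, tau):
    """Proximal operator of the nuclear norm: shrink singular values.
    Solves the low-rank-constraint subproblem."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Demo: apply the two operators to a (low-rank + sparse) matrix.
rng = np.random.default_rng(0)
low_rank = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))
sparse = (rng.random((50, 50)) < 0.05) * rng.standard_normal((50, 50)) * 10
M = low_rank + sparse
S_hat = soft_threshold(M, 5.0)            # keeps only large-magnitude entries
L_hat = singular_value_threshold(M, 5.0)  # suppresses small singular values
```

In the full method these operators act on the auxiliary variables inside the three-part alternating update (auxiliary variables, sparse coding, dictionary update) described above.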
This process alternates between two updates: the convolutional sparse coefficients and the convolutional low-rank coefficients. Second, the sparse component of the source image is obtained by convolving the convolutional sparse coefficients with the corresponding sub-dictionary; similarly, the convolutional low-rank coefficients are convolved with their sub-dictionary to obtain the low-rank component. To preserve image details, the novel sum-modified spatial frequency of the sparse component is computed as the external excitation of a pulse coupled neural network (PCNN), with the link strength determined adaptively by the regional average gradient, yielding a firing map of the sparse component. Likewise, the novel sum-modified Laplacian of the low-rank component is computed as the external excitation of the PCNN, again with the link strength determined adaptively by the regional average gradient, to obtain its firing map. The fused sparse component is obtained by comparing the firing counts of the sparse components of the different sources, and the low-rank components of the different source images are fused through their firing maps in the same way. Finally, the fused image is obtained by combining the fused convolutional sparse and convolutional low-rank components, which further improves fusion quality. Result Three sets of brain multimodal medical images (CT/MR, MR/PET, and MR/SPECT images) were used in simulations and compared with other fusion methods. The experimental results show that the proposed method is significantly superior to the six compared methods in objective evaluation and visual quality, with the best performance on four indicators. Compared with the six multimodal image fusion methods, the average standard deviation of the three groups of experiments increased by 7%, 10%, and 5.2%, respectively.
The average mutual information increased by 33.4%, 10.9%, and 11.3%, respectively; the average spatial frequency by 8.2%, 9.6%, and 5.6%; and the average edge evaluation factor by 16.9%, 20.7%, and 21.6%. Conclusion Compared with other sparse representation methods, the proposed algorithm effectively improves the quality of multimodal medical image fusion, better preserves the detail information of the source images, enriches the information in the fused image, and conforms to the visual characteristics of the human eye, better helping doctors diagnose diseases.
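As an illustration of the fusion stage described in the Method section, the sketch below pairs a sum-modified-Laplacian focus measure and the classic spatial frequency with a simplified PCNN whose accumulated firing counts drive a choose-max rule. The exact NSML/NMSF weightings, the adaptive link-strength rule, and all PCNN parameter values here are assumptions for illustration; the abstract does not specify them.

```python
import numpy as np

def box_sum(A, win=3):
    """Sum over a win x win neighborhood (edge-replicated borders)."""
    pad = win // 2
    P = np.pad(A, pad, mode="edge")
    out = np.zeros_like(A)
    for dy in range(win):
        for dx in range(win):
            out += P[dy:dy + A.shape[0], dx:dx + A.shape[1]]
    return out

def sum_modified_laplacian(img, win=3):
    """Modified Laplacian summed over a local window (plain SML;
    the paper's NSML variant may weight terms differently)."""
    I = img.astype(float)
    ml = np.zeros_like(I)
    ml[1:-1, 1:-1] = (np.abs(2 * I[1:-1, 1:-1] - I[:-2, 1:-1] - I[2:, 1:-1])
                      + np.abs(2 * I[1:-1, 1:-1] - I[1:-1, :-2] - I[1:-1, 2:]))
    return box_sum(ml, win)

def spatial_frequency(img):
    """Classic global spatial frequency: row/column gradient energy."""
    I = img.astype(float)
    rf2 = np.mean((I[:, 1:] - I[:, :-1]) ** 2)
    cf2 = np.mean((I[1:, :] - I[:-1, :]) ** 2)
    return np.sqrt(rf2 + cf2)

def pcnn_firing_counts(stim, beta=0.2, n_iter=30, aL=1.0, aT=0.2,
                       VL=1.0, VT=20.0):
    """Simplified PCNN: the external stimulus (e.g. an NSML or NMSF map)
    feeds each neuron, and firing counts are accumulated over n_iter
    steps. Parameter values are illustrative, and beta is fixed here
    rather than set adaptively from the regional average gradient."""
    F = stim.astype(float)                 # feeding input = stimulus
    L = np.zeros_like(F)
    Y = np.zeros_like(F)
    T = np.ones_like(F)                    # dynamic threshold
    fires = np.zeros_like(F)
    K = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    for _ in range(n_iter):
        P = np.pad(Y, 1)                   # linking input from neighbors
        link = sum(K[dy, dx] * P[dy:dy + Y.shape[0], dx:dx + Y.shape[1]]
                   for dy in range(3) for dx in range(3))
        L = np.exp(-aL) * L + VL * link
        U = F * (1.0 + beta * L)           # internal activity
        Y = (U > T).astype(float)          # fire where activity beats threshold
        T = np.exp(-aT) * T + VT * Y       # raise threshold after a firing
        fires += Y
    return fires

def fuse_by_firing(comp_a, comp_b, fires_a, fires_b):
    """Choose-max rule: take each coefficient from the source whose
    neuron fired more often."""
    return np.where(fires_a >= fires_b, comp_a, comp_b)
```

In the full method one PCNN (excited by NMSF) fuses the sparse components and another (excited by NSML) fuses the low-rank components, and the two fused components are then combined into the final image.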