Liu Yingqian, Yan Zhuangzhi (School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China)
Objective The lattice Boltzmann (LB) method, which serves both as a modeling tool and as a fast solver for partial differential equations (PDEs), has been successfully applied to image denoising, inpainting, and segmentation. Given that no review of the progress of LB methods in image processing has yet been published at home or abroad, this paper systematically surveys the field so that scholars about to enter it can gain a comprehensive picture of its current state. Method The analysis focuses on the literature closely related to image denoising, inpainting, segmentation, and 3D image processing. The construction of LB image processing models is divided into top-down and bottom-up approaches, and the LB models used in image processing are classified from both macro and micro perspectives. The implementation algorithms, their time complexities, and the concrete applications of the models are analyzed and summarized. Finally, the essential differences between the LB and PDE methods are discussed, and several open problems are pointed out. Result First, the LB method has a clear physical meaning in image processing: pixel values can be regarded as particle densities, and changes in pixel values can be regarded as a redistribution of particles governed by the relaxation time and the source term. Second, the micro-level difference among the anisotropic, nonlinear, and linear diffusion models lies in the relaxation time; their time complexities decrease in that order, and the complexity of diffusion models with source terms additionally depends on the external force term. Third, the top-down modeling approach merely treats LB as a solver for PDEs, whereas the bottom-up approach starts from the physical meaning of the LB method and directly designs the key parameters of the evolution equation, which makes it more flexible than the former. Fourth, the LB algorithm is inherently parallel and simple to program; when it is run on a parallel platform, the GPU/CPU speedup becomes more pronounced as the amount of image data grows. Fifth, the anisotropic and nonlinear diffusion models can be used for image denoising and inpainting, and the design of the external force term in diffusion models with source terms strongly affects segmentation quality. Conclusion Although the LB method, as an inherently parallel algorithm, has great application value in fast image processing such as 3D image denoising, registration, and segmentation, several problems remain worth studying, including boundary-condition handling and the selection and optimization of parallel platforms.
Review of Lattice Boltzmann method for image processing
Liu Yingqian, Yan Zhuangzhi (School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China)
Objective Currently, images such as 3D medical images and high-resolution satellite images carry considerable information, and in many cases, such as clinical and meteorological applications, processing results are required in real time. Parallel image processing devices, such as graphics processing units (GPUs) and field-programmable gate arrays, have become available to engineers at affordable prices. The partial differential equation (PDE) method is extensively used in image processing. However, its solution methods are time-consuming and difficult to map directly onto GPUs, and traditional PDE solution schemes discretize equations that assume space and time to be continuous. Thus, a naturally parallel, simple method with a clear physical meaning is needed to simulate the macro models described by PDEs. Recently, the lattice Boltzmann (LB) method has been applied to image denoising, inpainting, registration, and segmentation as an efficient and flexible method for modeling and solving PDEs. However, a systematic review of the applications of LB to image processing has not been found in previous studies. Therefore, this paper presents such a literature review to support scholars in gaining further insight into the frontier development of the topic. Method In this work, numerous public reports on the applications of LB to image denoising, inpainting, segmentation, and other 3D image processing tasks were first surveyed using the keywords "lattice Boltzmann" and "image processing." These reports were classified according to how their LB mathematical models were proposed, namely, by a "top-down" (macro) or a "bottom-up" (micro) approach. Then, the programming algorithms, computational complexities, and application scenarios of LB for image processing were analyzed and summarized. Finally, the essential differences between LB and other PDE-solving methods were summarized, and further research directions on this topic were proposed.
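The LB evolution equation referred to throughout can be written in the common single-relaxation-time (BGK) form; the notation below follows standard LB convention and is assumed here rather than taken from any specific surveyed paper:

```latex
f_\alpha(\mathbf{x} + \mathbf{e}_\alpha \Delta t,\; t + \Delta t)
  = f_\alpha(\mathbf{x}, t)
  - \frac{1}{\tau}\bigl[f_\alpha(\mathbf{x}, t) - f_\alpha^{\mathrm{eq}}(\mathbf{x}, t)\bigr]
  + \Delta t\, F_\alpha ,
\qquad
\rho(\mathbf{x}, t) = \sum_{\alpha} f_\alpha(\mathbf{x}, t),
```

where f_α is the particle density on link α, e_α is the lattice velocity along that link, τ is the relaxation time, F_α is the source term, and the macroscopic density ρ plays the role of the pixel value in image processing.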
Result First, the LB model has a clear physical meaning. The general LB method consists of two steps: a streaming step, in which particles (or particle densities) move from node to node on a lattice, and a collision step, in which particles (or particle densities) are redistributed at each node. The two steps are governed by the LB evolution equation, whose parameters, the relaxation time and the source term, determine the movement of the particles. Because particles move along the b links, the state of each node at the next moment is related only to the states of its neighboring nodes. In image processing, each pixel value is regarded as a particle density, and changes in pixel values can be regarded as a redistribution of particles determined by the relaxation time τ and the source term Fα, in which image information, such as gradient and curvature, is embedded. Second, macro models can be classified into anisotropic, nonlinear, and linear diffusion models according to the diffusion tensor. The micro-level differences among these macro models are determined by the relaxation time. In the anisotropic diffusion model, τ takes a different value on each of the b links, and each value changes according to image information. In the nonlinear diffusion model, τ takes the same value on all b links, and this value changes with image information as in the anisotropic model. In the linear model, τ takes the same constant value on all b links. τ changes in the anisotropic and nonlinear models because the pixel values change after each iteration; Fα changes in the models with external force terms. These two parameters must be recomputed in each iteration. Consequently, the computational complexities of the anisotropic, nonlinear, and linear diffusion models decrease in that order, and the computational complexity of LB models that include external force terms is additionally determined by the source terms. Third, the "top-down" approach uses the LB evolution equation to construct a macro model that matches an existing image processing PDE.
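The streaming and collision steps above can be sketched for the simplest case, the linear diffusion model with a constant τ on all links. The following is a minimal illustration, not an implementation from any surveyed paper; it assumes a D2Q5 lattice (rest link plus 4 axis-aligned links), periodic boundaries via np.roll, and an arbitrary 64×64 test image:

```python
import numpy as np

# D2Q5 lattice: one rest link and 4 axis-aligned links (an assumed minimal
# setup; surveyed works also use D2Q9 and 3D lattices such as D3Q19).
E = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]])  # link directions
W = np.array([1/3, 1/6, 1/6, 1/6, 1/6])                   # link weights

def lb_diffusion_step(f, tau):
    """One LB iteration: BGK collision toward equilibrium, then streaming."""
    rho = f.sum(axis=0)                # macroscopic density = pixel value
    feq = W[:, None, None] * rho       # equilibrium distribution per link
    f = f - (f - feq) / tau            # collision (relaxation time tau)
    for a, (dx, dy) in enumerate(E):   # streaming along the b links
        # periodic boundaries for brevity; boundary handling is an open issue
        f[a] = np.roll(f[a], shift=(dx, dy), axis=(0, 1))
    return f

# usage: a noisy image becomes smoother after a few iterations
img = np.random.rand(64, 64)
f = W[:, None, None] * img             # initialize links at equilibrium
for _ in range(20):
    f = lb_diffusion_step(f, tau=1.0)
denoised = f.sum(axis=0)
```

Because both steps are explicit and purely local, each node can be updated independently, which is exactly the property that maps the algorithm directly onto GPU threads.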
Then, τ and Fα are determined by the PDE diffusion tensor and the external force term, respectively. The "bottom-up" approach constructs τ and Fα directly according to the physical meaning of the LB method. The first approach uses the LB method merely as an alternative way of solving a PDE and requires considerable mathematical skill; the second approach makes constructing the mathematical model easy and flexible. Fourth, the LB method is inherently parallel and thus naturally suited to GPUs, which are ideal for explicit, local, lattice-based computations. The speed advantage of LB is obvious when the amount of image data is large: the GPU/CPU speedup factors are larger for large 3D volumes than for small ones. Programming the LB method is also simple; the core LB algorithm can be implemented in a few lines of code in a short coding time. Fifth, the anisotropic and nonlinear diffusion models can be used in image denoising and inpainting, and the design of the external force terms significantly influences the quality of image segmentation. Conclusion The LB method, as a naturally parallel algorithm, has high research value in fast image processing tasks such as 3D image denoising, inpainting, and segmentation. However, several problems must still be studied further, such as image boundary processing and parallel platform selection and optimization.
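As an illustration of the "bottom-up" approach, τ can be designed directly from image information instead of being derived from a PDE. The sketch below maps local gradient magnitude to a per-node relaxation time through a Perona-Malik-style edge-stopping function; this particular design is a hypothetical example, not one taken from a specific surveyed paper:

```python
import numpy as np

def tau_from_gradient(rho, k=0.1):
    """Bottom-up design of the relaxation time (hypothetical example):
    edges (large gradient)  -> tau near 0.5, i.e., weak diffusion;
    flat regions            -> tau near 1.0, i.e., strong smoothing.
    Keeping tau > 0.5 preserves the stability of the LB scheme."""
    gy, gx = np.gradient(rho)
    g = np.hypot(gx, gy)               # local gradient magnitude
    c = 1.0 / (1.0 + (g / k) ** 2)     # edge-stopping function in (0, 1]
    return 0.5 + 0.5 * c               # rescale into the stable range (0.5, 1]

# usage: a step edge gets a smaller tau than a flat region
img = np.zeros((8, 8))
img[:, 4:] = 1.0
tau = tau_from_gradient(img)
```

Plugging such a spatially varying τ into the collision step yields the nonlinear (or, with per-link values, anisotropic) diffusion behavior described above, at the cost of recomputing τ in every iteration.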