Published online: 2019-11-16
Image Understanding and Computer Vision
Received: 2019-03-19; revised: 2019-05-30; preprint: 2019-06-06
Supported by: National Natural Science Foundation of China (61601213); China Postdoctoral Science Foundation (2017M611252); Projects of the Education Department of Liaoning Province (LJYL017, LR2016045)
About the first author:
Jia Di, born in 1982, male, associate professor; research interests: stereo matching and 3D reconstruction, visual spatial localization. E-mail: lntu_jiadi@163.com
Zhao Mingyuan, male, M.S. candidate; research interest: dense stereo matching of image pairs. E-mail: zju_zmy@163.com. Yang Ninghua, male, M.S. candidate; research interest: image matching. E-mail: lgdyangninghua@163.com
CLC number: TP391
Document code: A
Article ID: 1006-8961(2019)11-1962-10
Abstract
Objective The dense matching of image pairs is the basis of advanced image-processing technologies such as vision localization, image fusion, and super-resolution reconstruction. Efficient dense matching results are difficult to obtain because image pairs may be affected by various photographic conditions. Therefore, this study proposes a dense matching method combining a density-clustering smoothing constraint with equal-proportion triangulation. Method First, the ORB (oriented FAST and rotated BRIEF) algorithm is used to rapidly obtain the sparse matching point set and the corresponding point set. The number of feature points that are directly density-reachable within the neighborhood centered on each feature point is counted using an integral image. The offset angle, position information, and Euclidean distance of each feature point pair are used to perform density-estimation clustering, and the feature point pairs in each cluster are expanded under smoothing constraints; the inlier set is thus obtained rapidly. Second, the equal-proportion property of triangulation under affine transformation, which plays a key role in the subsequent matching process, is proven. A triangulation is constructed on the basis of the inlier set. The positions of corresponding equal-proportion points inside the triangulations of the two images to be matched are calculated using this property, and the similarity of the two triangular regions is checked using the color information at these equal-proportion points to purify the inlier set. Finally, the positions of the dense matching points are calculated from the refined inlier set as the final dense matching result. Result Three pairs of images with large photographic baselines from the Mikolajczyk dataset were selected for the feature point matching and fast dense matching experiments. These were image pairs with scaling, repeated texture, and rotation.
All experiments were conducted on a CPU with a main frequency of 3.3 GHz and 8 GB of memory, with MATLAB on Windows as the development tool. Analysis of matching accuracy and efficiency showed that the proposed method resists rotation, scale change, and repeated texture, can estimate local consistency, and achieves dense matching of image pairs. It also avoids the inaccurate estimation of the affine transformation matrix caused by a few local outliers, which would otherwise degrade the accuracy of global planar dense matching. The experimental parameters of grid-based motion statistics (GMS) and of the DeepMatching algorithm were their default values, while the empirical values of the density-clustering smoothing-constraint inlier purification algorithm were obtained through extensive experiments. GMS uses a grid-based smooth motion constraint that eliminates outliers while completing locally invariant point feature matching, ensuring matching accuracy and improving processing speed. However, it is restricted by the grid parameters and boundary conditions, which reduces the number of sparse matching points it obtains and hinders subsequent dense matching. The number of sparse matching points obtained by the proposed method was clearly larger than that of GMS. The advantage of the DeepMatching algorithm is that it does not depend strongly on continuity and monotonicity constraints. Nevertheless, its time complexity is high and its running time long, because the dense result obtained at each layer is checked step by step through a pyramid architecture. The density of our results was higher than that of the DeepMatching results, and the inlier purity was higher after the smoothing constraint and the equal-proportion triangulation constraint, whereas obvious outliers remained in the DeepMatching results.
The dense matching range of the proposed method was not as wide as that of DeepMatching, owing to the distribution of the sparse matching points; methods with higher sparse matching performance (e.g., affine scale-invariant feature transform) can effectively address this limitation. The memory and time requirements of the proposed algorithm increased linearly with image size, and its matching time grew slowly. The difference between the processing times of this algorithm and of DeepMatching became increasingly obvious, and the accuracy gap between the two was likewise obvious and stable. When the image size increased to 512 pixels, the accuracy of the proposed algorithm reached 0.9 (within a 10-pixel error). The algorithm was superior to DeepMatching in both time efficiency and accuracy; in particular, when dealing with large images, it guaranteed high accuracy while greatly shortening the dense matching time. Therefore, the proposed method not only increases the number of sparse matching points but also improves execution speed, accuracy, and efficiency on large images. Conclusion The experimental results verify the efficiency and practicability of the proposed method. In the future, this method will be integrated into advanced image processing such as 3D reconstruction and super-resolution reconstruction.
Key words
local consistency estimation; dense matching; equal proportion triangulation; affine transformation; smooth constraint
0 Introduction
When the photographed scene is far from the camera, it can be approximated as a plane; ground scenes captured from satellites, for example, satisfy this condition. Such images are often used for image fusion [1] and super-resolution reconstruction [2], where dense matching is usually built on sparse matching points to establish pixel-level correspondences. Although methods such as RANSAC (random sample consensus) [3], TCH (two-column histogram) [4], and VFC (vector field consensus) [5] can remove some of the outliers from the transformation matrix computed from matching points, no existing method can completely purify the inliers. If some of the matches are biased, the estimated affine transformation matrix becomes inaccurate, which degrades subsequent high-level image processing. To avoid this problem, a dense matching method for image pairs combining smoothing constraints with equal-proportion triangulation is proposed, aiming to prevent a few local outliers from corrupting the affine matrix estimate and thereby the dense matching accuracy of the whole plane. In Fig. 1, Fig. 1(d) is obtained from Fig. 1(a) by a shear transformation, and Fig. 1(b) is the SURF feature matching result between Fig. 1(a) and Fig. 1(d), from which the affine transformation matrix is estimated
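To make this failure mode concrete, the following minimal sketch (an illustration only, not the paper's implementation; all names are hypothetical) fits a 2×3 affine matrix to synthetic matches by least squares and shows how a single gross outlier perturbs the estimate:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine matrix A such that dst ~ [src | 1] @ A.T."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])      # homogeneous source points
    sol, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return sol.T

rng = np.random.default_rng(0)
src = rng.uniform(0.0, 100.0, (30, 2))
A_true = np.array([[1.2, 0.3, 5.0],
                   [-0.2, 0.9, -3.0]])
dst = src @ A_true[:, :2].T + A_true[:, 2]     # exact affine image of src

A_clean = fit_affine(src, dst)                 # recovers A_true almost exactly
dst_bad = dst.copy()
dst_bad[0] += 80.0                             # one gross outlier match
A_bad = fit_affine(src, dst_bad)               # estimate is now visibly skewed
print(np.abs(A_clean - A_true).max())
print(np.abs(A_bad - A_true).max())
```

Robust estimators such as RANSAC [3] mitigate this effect but, as noted above, do not fully eliminate it, which motivates the inlier purification proposed here.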
To avoid a few local outliers making the estimated affine transformation matrix inaccurate and thus degrading the dense matching accuracy of the whole plane, while still obtaining sufficiently dense matches quickly, the following innovations are proposed: 1) Building on the fast sparse matching algorithm GMS (grid-based motion statistics) [6], the idea of density-estimation clustering is adopted; the consistency and smoothness of feature point pairs within a cluster are considered, and an integral image is used to accelerate the expansion of matching pairs in each cluster, so that a larger set of corresponding points is obtained quickly. 2) The equal-proportion property of triangulation under affine transformation is proven. Using this property, the positions of corresponding equal-proportion points inside the triangulations of the two images to be matched are computed; these points are used to check the similarity of the two triangular regions, further removing outliers, and the resulting inlier set is used to compute the positions of the dense matching points as the final result.
1 Algorithm description
1.1 Inlier purification by density clustering with smoothing constraints
Matching pairs obtained by common sparse matching methods [7-10] contain many false matches, and purifying the inliers of the feature matching result remains difficult. Although methods such as those in Refs. [3-5] can remove some outliers and improve accuracy, they also slow down feature matching as a whole. Bian et al. proposed the GMS method [6] in 2017, which uses a grid-based smooth motion constraint to eliminate outliers while completing locally invariant point feature matching, thus ensuring matching accuracy and improving processing speed. However, this method is restricted by the grid parameter values and boundary conditions, which reduces the number of sparse matching points obtained and hinders subsequent dense matching. As shown in Fig. 2, for one grid parameter value the number of feature points inside the blue cell is insufficient to support the consistency decision, whereas inside the green dashed cell the consistency decision holds.
To increase the number of sparse matching points, a feature point matching algorithm (shown in Fig. 3) is adopted to solve the above problem. In Fig. 3, (
The algorithm is implemented as follows:
First, from the source image
$f\left( N_i, N_j \right)=\begin{cases}1, & f_{\mathrm{d}}\left(N_i, N_j\right) \leq \varepsilon,\ (i \neq j)\\ 0, & \text{otherwise}\end{cases}$ | (1) |
where
$C_{N_{i}}=\iint f(x, y)\, \mathrm{d}x\, \mathrm{d}y = D(x, y)+A(x, y)-B(x, y)-C(x, y)$ | (2) |
where
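As an illustration of the integral-image counting behind Eq. (2) (a pure-NumPy sketch with hypothetical names, not the authors' code), a summed-area table lets the number of keypoints inside any axis-aligned neighborhood be read off with the four-corner rule $D + A - B - C$:

```python
import numpy as np

def integral_image(mask):
    """Summed-area table with a zero padding row/column for easy lookups."""
    ii = np.zeros((mask.shape[0] + 1, mask.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(mask, axis=0), axis=1)
    return ii

def count_in_window(ii, y0, x0, y1, x1):
    """Number of feature points inside rows [y0, y1) and cols [x0, x1)."""
    return ii[y1, x1] + ii[y0, x0] - ii[y0, x1] - ii[y1, x0]

mask = np.zeros((8, 8), dtype=np.int64)     # binary map of detected keypoints
for y, x in [(1, 1), (2, 3), (5, 5), (6, 2)]:
    mask[y, x] = 1
print(count_in_window(integral_image(mask), 0, 0, 4, 4))  # -> 2
```

After one O(HW) pass, every neighborhood count costs only four array lookups, which is what makes the cluster-expansion step fast.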
Then, for each
$E_{ij}=\sqrt{\left(D_i-D_j\right)^2+\left(\theta_i-\theta_j\right)^2+\left(x_i-x_j\right)^2+\left(y_i-y_j\right)^2}, \quad i, j=1, \cdots, n,\ i \neq j$ | (3) |
where
$\boldsymbol{M}_{n \times n}=\begin{pmatrix}0 & E_{12} & \cdots & E_{1 n} \\ E_{21} & 0 & \cdots & E_{2 n} \\ \vdots & \vdots & \ddots & \vdots \\ E_{n 1} & E_{n 2} & \cdots & 0\end{pmatrix}, \quad E_{i j}=E_{j i},\ i \neq j$ | (4) |
where
Finally, the smoothing constraint determines whether the obtained cluster set
$\left\|\frac{1}{m} \sum\limits_{j=1}^{m}\left(E_{i j}-\mu\right)^{2}-\varepsilon\right\| \leq \alpha, \quad \mu=\frac{2}{n(n-1)} \sum\limits_{i=1}^{n} \sum\limits_{j=i+1}^{n} E_{i j}$ | (5) |
where
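The per-match descriptor of Eq. (3) and the symmetric distance matrix of Eq. (4) can be sketched as follows (an illustrative NumPy toy example under stated assumptions; the clustering itself and the smoothness test of Eq. (5) are omitted, and all names are hypothetical):

```python
import numpy as np

def match_descriptor(src_pts, dst_pts):
    """Per-match 4-vector (D, theta, x, y): offset length, offset angle,
    and source position, as combined in Eq. (3)."""
    d = dst_pts - src_pts
    D = np.linalg.norm(d, axis=1)              # Euclidean offset length
    theta = np.arctan2(d[:, 1], d[:, 0])       # offset angle
    return np.column_stack([D, theta, src_pts[:, 0], src_pts[:, 1]])

def pairwise_E(F):
    """Symmetric matrix of Eq. (4): E_ij = ||F_i - F_j||."""
    diff = F[:, None, :] - F[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

# two mutually consistent matches plus one inconsistent one
src = np.array([[0., 0.], [1., 0.], [2., 1.]])
dst = np.array([[3., 4.], [4., 4.], [9., -5.]])
E = pairwise_E(match_descriptor(src, dst))
print(E[0, 1] < E[0, 2])   # -> True: the consistent pair is closer
```

Matches belonging to the same smooth local motion produce small mutual $E_{ij}$ and therefore fall into the same density cluster, while the inconsistent match stays distant.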
$\boldsymbol{A}=\bigcup\limits_{i}\left\{\left(N_{i}, M_{i}\right) \cup \boldsymbol{P}_{i}\right\}, \quad \left(N_{i}, M_{i}\right) \in \boldsymbol{C}$ | (6) |
1.2 Dense matching by equal-proportion triangulation
After the outliers of the sparse matching are removed, dense matching can proceed on the basis of the current inlier set. To strengthen the smoothness constraint at image edges, Barnes et al. [11] proposed the PatchMatch method, which cleverly fills image edges with other regions of the image that best match the edge part. With random initialization, as long as one patch is matched correctly, the match can be propagated to neighboring patches, and through iteration the most similar matching region is eventually found for every part; however, comparing the similarity of every patch makes its processing time long. Revaud et al. [12] proposed the DeepMatching method, whose advantage is a weak dependence on continuity and monotonicity constraints; because it verifies the densified result of each layer step by step through a pyramid architecture, its time complexity is high and its running time long. Ref. [13] proposed a dense matching method for close-range images that inserts points into a triangulation; it assumes that, ideally, the centroids of corresponding triangles are corresponding points, which are then filtered by color-similarity and epipolar constraints. This method works well for close-range images with small parallax changes or for video sequences, but is difficult to apply to close-range images with large parallax changes. To address this, Ref. [14] proposed a simple and effective dense matching algorithm for close-range images with an iteratively constrained triangulation. It takes the Delaunay triangulation built from the initial corresponding points as the matching basis, uses the centroids of the triangles in the left image as matching primitives, and determines the corresponding points in the right image through multiple combined constraints. During the iteration the whole network is rebuilt, with triangle area as an indirect constraint and the generation of new corresponding points as the direct stopping criterion, achieving good dense matching results. Its problem is illustrated in Fig. 5: after the centroid of the blue triangle is solved twice, the pixels on the red lines can no longer be densely matched in subsequent iterations, and as the number of iterations increases, the number of unmatched pixels grows, reducing the matching density.
Ref. [15] computes corresponding matching points using triangulation and affine constraints: sampling points with a Poisson distribution are generated to construct triangulations of the left and right images, and the affine transformation matrix of each corresponding triangle pair is computed
Assume
The key theory of this method rests on the affine invariance properties of plane figures: under arbitrary stretching in two directions, collinearity and concurrency relations are preserved, and the centroid of a plane triangle is affine invariant. As shown in Fig. 6, Δ
The proof is as follows:
Given: assume
Proof:
From
then
we obtain
From
Similarly:
Therefore
Similarly, when
${x_n}^i = {x_o} - \frac{{{x_o} - {x_p}}}{n} $ | (7) |
${y_n}^{i}=y_{o}-\frac{y_{o}-y_{p}}{n} $ | (8) |
Δ
$x=\frac{x_{o}+x_{p}+x_{q}}{3} $ | (9) |
$y=\frac{y_{o}+y_{p}+y_{q}}{3} $ | (10) |
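The affine invariance of such proportional points, including the centroid of Eqs. (9) and (10), is easy to check numerically. The sketch below (illustrative only; names are hypothetical) expresses an equal-proportion point through barycentric weights and verifies that the affine image of the point coincides with the same-weighted point of the transformed triangle:

```python
import numpy as np

def proportional_point(tri, w):
    """Point with fixed barycentric weights w (summing to 1) inside the
    triangle tri, given as a 3x2 array of vertex coordinates."""
    return w @ tri

A = np.array([[1.3, 0.4], [-0.2, 0.8]])    # linear part of an affine map
t = np.array([5.0, -2.0])                  # translation part
tri1 = np.array([[0., 0.], [4., 0.], [1., 3.]])
tri2 = tri1 @ A.T + t                      # affine image of the triangle

w = np.array([1 / 3, 1 / 3, 1 / 3])        # centroid weights, Eqs. (9)-(10)
p1 = proportional_point(tri1, w)
p2 = proportional_point(tri2, w)
print(np.allclose(p1 @ A.T + t, p2))       # -> True: centroid maps to centroid
```

Because the weights sum to 1, the same identity holds for any interior equal-proportion point, which is exactly what lets corresponding points be generated in both triangles without estimating the local affine matrix explicitly.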
The correspondence of these equal-proportion points under affine transformation has thus been proven. To densify the image pair quickly, we first obtain the triangular regions
$\begin{array}{c}\Omega(p, r)=\left\{\frac{1}{m} \operatorname{disp}\left(Tri_{1}(p), Tri_{2}(p)\right) \leq r\right\} \\ \operatorname{disp}\left(Tri_{1}(p), Tri_{2}(p)\right)= \\ \sqrt{\sum\limits_{j=1}^{m}\left[\begin{array}{l}\left(\left(Tri_{1}^{i}\left(p_{j}\right)\right)_{R}-\left(Tri_{2}^{i}\left(p_{j}\right)\right)_{R}\right)^{2}+\\ \left(\left(Tri_{1}^{i}\left(p_{j}\right)\right)_{G}-\left(Tri_{2}^{i}\left(p_{j}\right)\right)_{G}\right)^{2}+\\ \left(\left(Tri_{1}^{i}\left(p_{j}\right)\right)_{B}-\left(Tri_{2}^{i}\left(p_{j}\right)\right)_{B}\right)^{2}\end{array}\right]}\end{array}$ | (11) |
where
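One plausible reading of the color check in Eq. (11) (an assumption: the exact placement of the 1/m normalization is ambiguous in the extracted formula) is the mean RGB distance over the m sampled equal-proportion points, sketched here with hypothetical names:

```python
import numpy as np

def disp(colors1, colors2):
    """Mean RGB distance over m corresponding sample points.
    colors1, colors2: (m, 3) arrays of RGB values sampled at the
    equal-proportion points of the two triangles."""
    return np.sqrt(((colors1 - colors2) ** 2).sum(axis=1)).mean()

c1 = np.array([[10, 20, 30], [40, 50, 60]], dtype=float)
c2 = c1 + 3.0                      # every channel shifted by 3
print(disp(c1, c2))                # -> sqrt(27) ~ 5.196 for every point
```

A triangle pair is accepted as a valid correspondence when this dissimilarity falls below the threshold r; otherwise its vertices are treated as suspect and removed in the second purification pass.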
1.3 Pseudocode
Combining the descriptions in Sections 1.1 and 1.2, the algorithm flow is given as follows.
Algorithm 1: inlier purification by density clustering with smoothing constraints
Input: image pair to be matched
Output: inlier set
1) Put
2) Build [
3) Compute
4) Traverse
5) Take
6) For each matching pair
7) Apply the smoothing constraint to the matching points in the neighborhood, using [
Algorithm 2: dense matching by equal-proportion triangulation
Input: inlier set
Output: coordinates of the dense matching points in the triangulation
1) Construct
2) According to
3) Compute the triangulation
4) Compute the triangulation
5) The triangulation contains
for
if (
end
end
6) According to the inlier set, rebuild
7) Re-insert the equal-proportion points (as in step 4)) and output
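The densification step of Algorithm 2 can be sketched as follows (illustrative NumPy only; the Delaunay construction and the color check of Eq. (11) are omitted, and all names are hypothetical). The same interior barycentric weights are sampled in each pair of corresponding triangles, which, by the property proven in Section 1.2, yields affine-consistent dense matches:

```python
import numpy as np

def densify_pair(tri1, tri2, n=10):
    """Generate dense corresponding points inside a pair of matched
    triangles by sampling identical barycentric weights in both.
    tri1, tri2: 3x2 vertex arrays; returns two (k, 2) point arrays."""
    pts1, pts2 = [], []
    for i in range(1, n):
        for j in range(1, n - i):
            k = n - i - j                  # strictly interior weights
            w = np.array([i, j, k]) / n
            pts1.append(w @ tri1)
            pts2.append(w @ tri2)
    return np.array(pts1), np.array(pts2)

tri1 = np.array([[0., 0.], [8., 0.], [0., 8.]])
A = np.array([[1.1, 0.2], [0.1, 0.9]])     # synthetic local affine map
t = np.array([3., 1.])
tri2 = tri1 @ A.T + t
p1, p2 = densify_pair(tri1, tri2, n=8)
# every generated match obeys the same affine map as the triangle vertices
print(np.allclose(p1 @ A.T + t, p2))       # -> True
```

Running this over every accepted triangle pair of the inlier triangulation produces the final dense matching result without per-triangle matrix estimation.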
2 Experimental results and analysis
2.1 Experimental results
The experimental parameters of GMS and DeepMatching were chosen as in the methods of Ref. [10] and Ref. [12], respectively; the empirical value
Fig. 7 shows the image pairs selected for the experiments: Fig. 7(a) is a wide-baseline pair with rotation, Fig. 7(b) is a pair with scaling, rotation, and repeated texture, and Fig. 7(c) is a pair with repeated texture.
Fig. 8 shows the matching results on the Graf pair: Fig. 8(a) is the GMS result and Fig. 8(b) is the result of the proposed feature point matching, in which the number of matching points inside the yellow boxes is clearly larger than in the GMS result, experimentally verifying that the method of Section 1.1 obtains more sparse matching points. Fig. 8(c) is the result obtained by DeepMatching, and Fig. 8(d) is the result of the proposed dense matching method; the density of our result is clearly higher than that of the DeepMatching result.
Fig. 9 shows comparative results on a pair with scaling, rotation, and repeated texture. As seen in Fig. 9, the result of the proposed dense method is not only denser than that of DeepMatching, but, after the smoothing constraint of Section 1.1 and the equal-proportion triangulation constraint of Section 1.2, its inlier purity is also higher, whereas the DeepMatching result contains obvious outliers, as shown in the lower-right region of the right image in Fig. 9(c).
Fig. 10 shows comparative results on a pair with highly repetitive texture; in the dense matching comparison, obvious outliers appear in the right-hand region of the right image in Fig. 10(c) obtained by DeepMatching.
Overall, from Figs. 8-10, the dense matching range of the proposed method is not as wide as that of DeepMatching, mainly limited by the distribution of the sparse matching points; adopting a method with higher sparse matching performance (e.g., ASIFT) would largely solve this problem. Comparing all these experimental results, the proposed algorithm performs at a high level in both matching density and accuracy.
2.2 Analysis of time efficiency and accuracy
In the experiments, DeepMatching and the proposed algorithm were applied to four images of different sizes to compare their complexity and accuracy. The memory and time requirements of the proposed algorithm both grow linearly with image size; the execution time curves are shown in Fig. 11 and the accuracy curves in Fig. 12.
As seen in Fig. 11, the running time of DeepMatching is always higher than that of the proposed method, and as the image size increases, the matching time of the proposed method grows more slowly, remaining far below the processing time of DeepMatching.
In Fig. 11, for image sizes between 128 and 512 pixels the running times of the two algorithms differ little, but beyond 512 pixels the slope of the DeepMatching time curve changes markedly and the gap between the two processing times keeps increasing.
In Fig. 12, the proposed method achieves high registration accuracy for image sizes between 128 and 1 024 pixels; for sizes between 256 and 1 024 pixels, its accuracy gap over DeepMatching is obvious and stable. When the image size increases to 512 pixels, the accuracy of the proposed algorithm reaches 0.9 (with a 10-pixel error tolerance).
Thus the proposed algorithm outperforms DeepMatching in time efficiency and accuracy; in particular, when processing large images, it greatly shortens the dense matching time while maintaining high accuracy.
3 Conclusion
Addressing the problems of existing dense matching of image pairs, this paper proposes a dense matching method combining density clustering with smoothing constraints and equal-proportion triangulation. Exploiting the speed of ORB sparse matching and the principle of smooth local consistency between feature point pairs, density clustering combined with an integral image solves, in both speed and quality, the problem of ORB producing many outliers, yielding a sufficiently large inlier set. With the inliers as control points, equal-proportion points are inserted into a series of triangular regions to perform a second purification of the inlier set and the dense matching of the image pair, and the soundness of the method is proven mathematically. Experimental results show that, compared with DeepMatching, the proposed method achieves higher matching density, accuracy, and time-space efficiency, and can play an important role in high-level image processing such as image fusion and super-resolution reconstruction.
References
[1] Li B, Ming D L, Yan W W, et al. Image matching based on two-column histogram hashing and improved RANSAC[J]. IEEE Geoscience and Remote Sensing Letters, 2014, 11(8): 1433-1437. [DOI:10.1109/LGRS.2013.2295115]
[2] Barnes C, Shechtman E, Goldman D B, et al. The generalized PatchMatch correspondence algorithm[C]//Proceedings of European Conference on Computer Vision. Berlin: Springer-Verlag, 2010: 29-43. [DOI:10.1007/978-3-642-15558-1_3]
[3] Bian J W, Lin W Y, Matsushita Y, et al. GMS: grid-based motion statistics for fast, ultra-robust feature correspondence[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE, 2017: 2828-2837. [DOI:10.1109/CVPR.2017.302]
[4] Lowe D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2): 91-110. [DOI:10.1023/b:visi.0000029664.99615.94]
[5] Bay H, Tuytelaars T, Van Gool L. SURF: speeded up robust features[C]//Proceedings of European Conference on Computer Vision. Berlin: Springer, 2006: 404-417. [DOI:10.1007/11744023_32]
[6] Ma J Y, Zhao J, Tian J W, et al. Robust point matching via vector field consensus[J]. IEEE Transactions on Image Processing, 2014, 23(4): 1706-1721. [DOI:10.1109/TIP.2014.2307478]
[7] Yi K M, Trulls E, Lepetit V, et al. LIFT: learned invariant feature transform[C]//Proceedings of European Conference on Computer Vision. Cham: Springer, 2016: 1-16. [DOI:10.1007/978-3-319-46466-4_28]
[8] Mikolajczyk K, Schmid C. A performance evaluation of local descriptors[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(10): 1615-1630. [DOI:10.1109/TPAMI.2005.188]
[9] Mikolajczyk K, Tuytelaars T, Schmid C, et al. A comparison of affine region detectors[J]. International Journal of Computer Vision, 2005, 65(1-2): 43-72. [DOI:10.1007/s11263-005-3848-x]
[10] Zhao Q S, Wu X Q, Bu W. Contactless palmprint verification based on SIFT and iterative RANSAC[C]//Proceedings of 2013 IEEE International Conference on Image Processing. Melbourne, VIC, Australia: IEEE, 2013: 4186-4189. [DOI:10.1109/ICIP.2013.6738862]
[11] Rublee E, Rabaud V, Konolige K, et al. ORB: an efficient alternative to SIFT or SURF[C]//Proceedings of 2011 International Conference on Computer Vision. Barcelona, Spain: IEEE, 2011: 2564-2571. [DOI:10.1109/ICCV.2011.6126544]
[12] Revaud J, Weinzaepfel P, Harchaoui Z, et al. DeepMatching: hierarchical deformable dense matching[J]. International Journal of Computer Vision, 2016, 120(3): 300-323. [DOI:10.1007/s11263-016-0908-3]
[13] Wang J X, Zhang J, Zhang X. A dense matching algorithm of close-range images constrained by iterative triangle network[J]. Journal of Signal Processing, 2018, 34(3): 347-356. [DOI:10.16798/j.issn.1003-0530.2018.03.012]
[14] Wang X J, Xing F, Liu F. Stereo matching of objects with same features based on Delaunay triangulation and affine constraint[J]. Acta Optica Sinica, 2016, 36(11): 1115044. [DOI:10.3788/AOS201636.1115004]
[15] Xu N, Xiao X Y, You H J, et al. A pansharpening method based on HCT and joint sparse model[J]. Acta Geodaetica et Cartographica Sinica, 2016, 45(4): 434-441. [DOI:10.11947/j.AGCS.2016.20150372]
[16] Zhu H, Song W D, Tan H, et al. Remote sensing images super resolution reconstruction based on multi-scale detail enhancement[J]. Acta Geodaetica et Cartographica Sinica, 2016, 45(9): 1081-1088. [DOI:10.11947/j.AGCS.2016.20150451]
[17] Zhu H, Song W D, Yang D, et al. Dense matching method of inserting point into the Delaunay triangulation for close-range image[J]. Science of Surveying and Mapping, 2016, 41(4): 19-23. [DOI:10.16251/j.cnki.1009-2307.2016.04.005]