Image dense matching based on smooth constraint and triangulation proportion
2019, Vol. 24, No. 11, pp. 1962-1971
Received: 2019-03-19; Revised: 2019-05-30; Accepted: 2019-06-06; Published in print: 2019-11-16
DOI: 10.11834/jig.190079
目的 (Objective)
The dense matching of image pairs underpins advanced image-processing techniques such as visual localization, image fusion, and super-resolution reconstruction. Because image pairs can be affected by many photographic conditions, efficient dense matching results are hard to obtain. This paper therefore proposes a dense image-pair matching method that combines a density-clustering smoothness constraint with equal-proportion triangulation.
方法 (Method)
To obtain corresponding points quickly, the ORB (oriented FAST and rotated BRIEF) algorithm extracts a sparse matching point set. An integral image is used to count, for each feature point, the directly density-reachable feature points in the neighborhood centered on it. The offset angle, position information, and Euclidean distance of every matched feature-point pair are computed for density-estimation clustering, and the clusters are expanded under a smoothness constraint, so the inlier set is obtained rapidly. The equal-proportion property of triangulation under affine transformation is proven. A triangulation is built on the inlier set; with this property, the positions of corresponding equal-proportion points inside matching triangles of the image pair are computed, and these points are used to check the similarity of the two triangular regions, further purifying the inlier set. Finally, the purified inlier set is used to compute the positions of the dense matching points, which form the final dense matching result.
结果 (Result)
Matching experiments on several public data sets with scale change, repeated texture, and rotation show that the method is robust to rotation, scale change, and repeated texture. It largely avoids the loss of overall planar dense-matching accuracy caused by inaccurate estimation of the affine transformation matrix due to local outliers, while still producing sufficiently dense matches quickly.
结论 (Conclusion)
The experiments verify the effectiveness and practicality of the method, whose results can serve later advanced image-processing techniques.
Objective
The dense matching of image pairs is the basis of advanced image-processing technologies such as visual localization, image fusion, and super-resolution reconstruction. Efficient dense matching results are difficult to obtain because image pairs may be affected by various photographic conditions. Therefore, this study proposes a dense matching method that combines a density-clustering smoothness constraint with equal-proportion triangulation.
Method
First, the ORB (oriented FAST and rotated BRIEF) algorithm is used to rapidly obtain the sparse matching point set and the corresponding point set. An integral image is used to count, for each feature point, the number of directly density-reachable feature points in the neighborhood centered on it. The offset angle, position information, and Euclidean distance of each matched pair are then used for density-estimation clustering, and the feature-point pairs in each cluster are expanded under smoothness constraints; the inlier point set is thus obtained rapidly. Second, the equal-proportion property of triangulation under affine transformation, which plays a key role in the subsequent matching process, is proven. A triangulation is constructed on the basis of the inlier point set. Using this property, the positions of corresponding equal-proportion points inside the triangulations of the two images to be matched are calculated, and the similarity of the two triangular regions is checked through the color information of these equal-proportion points to purify the inlier set. Finally, the positions of the dense matching points are calculated from the refined inlier set as the final dense matching result.
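The equal-proportion property that the method relies on can be illustrated with a short numerical sketch. The snippet below is a minimal illustration in Python (not the authors' implementation, and all function names are ours): a point defined by fixed barycentric proportions inside a triangle maps, under any affine transformation of the vertices, to the point with the same proportions in the transformed triangle.

```python
# Minimal numerical check of the equal-proportion property: barycentric
# (proportional) coordinates inside a triangle are preserved by any affine
# transformation. Pure-Python sketch; names are illustrative only.

def barycentric_point(tri, w):
    """Point with barycentric weights w = (w0, w1, w2) in triangle tri."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    w0, w1, w2 = w
    return (w0 * x0 + w1 * x1 + w2 * x2,
            w0 * y0 + w1 * y1 + w2 * y2)

def affine(pt, A, t):
    """Apply x -> A @ x + t, with A a 2x2 matrix given as nested tuples."""
    x, y = pt
    return (A[0][0] * x + A[0][1] * y + t[0],
            A[1][0] * x + A[1][1] * y + t[1])

tri = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]
A = ((2.0, 0.5), (-0.3, 1.5))        # arbitrary non-degenerate affine part
t = (10.0, -4.0)                     # arbitrary translation

w = (0.2, 0.5, 0.3)                  # fixed proportions, summing to 1
p = barycentric_point(tri, w)        # proportional point in the source triangle
tri2 = [affine(v, A, t) for v in tri]
q = barycentric_point(tri2, w)       # same proportions in the target triangle

# The affine image of p coincides with q: the proportions are invariant.
assert all(abs(a - b) < 1e-9 for a, b in zip(affine(p, A, t), q))
```

In the matching pipeline, this invariance is what lets corresponding interior points of two matched triangles be located directly from the triangle vertices, and their similarity checked, without explicitly estimating a per-triangle affine matrix.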
Result
Three pairs of images with large photographic baselines from the Mikolajczyk data set were selected for the feature point-matching and fast dense matching experiments; these image pairs exhibit scaling, repeated texture, and rotation. All experiments were conducted on a 3.3 GHz CPU with 8 GB of memory under Windows, with MATLAB as the development tool. Analysis of matching accuracy and efficiency shows that the proposed method resists rotation, scale change, and repeated texture, estimates local consistency, and achieves dense matching of image pairs. It also avoids the situation in which several local outliers make the estimate of the affine transformation matrix inaccurate and thereby degrade the accuracy of global planar dense matching. The experimental parameters of grid-based motion statistics (GMS) and of the DeepMatching algorithm were their default values; the empirical parameters of the density-clustering smoothness-constraint inlier-purification algorithm were obtained through extensive experiments. GMS uses a grid-smoothed motion-constraint method that completes locally invariant point-feature matching while eliminating outliers, which ensures matching accuracy and improves processing speed. However, GMS is restricted by its grid parameters and boundary conditions, which reduce the number of sparse matching points it obtains and hamper subsequent dense matching; the proposed method obtained clearly more sparse matching points than GMS. The advantage of DeepMatching is that it does not depend strongly on continuity and monotonicity constraints; nevertheless, its time complexity is high and its running time long, because the dense results obtained at each layer of its pyramid architecture are checked step by step. The proposed results were denser than the DeepMatching results, and inlier purity was higher after the smoothness and equal-proportion triangulation constraints, whereas obvious outliers remained in the DeepMatching results. The dense matching range of the proposed method is not as wide as that of DeepMatching; methods with strong sparse matching performance (e.g., affine scale-invariant feature transform) can effectively address this limitation owing to the distribution of their sparse matching points. The memory and time requirements of the proposed algorithm grow linearly with image size, and its matching time increases slowly, so the gap between its processing time and that of DeepMatching becomes increasingly obvious, as does the stable accuracy gap between the two. When the image size increased to 512 pixels, the accuracy of the proposed algorithm reached 0.9 (within an error of 10 pixels). The algorithm is therefore superior to DeepMatching in both time efficiency and accuracy; particularly for large images, it maintains high accuracy while greatly shortening the dense-matching time. In summary, the proposed method not only increases the number of sparse matching points but also improves execution speed, accuracy, and efficiency on large images.
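The accuracy figure reported above (the fraction of matches whose reprojection error under the ground-truth transformation falls below a pixel threshold, e.g. 0.9 at 10 pixels) can be computed as in the generic sketch below. This is an evaluation-metric illustration only, not the authors' code; the function names are ours, and the ground-truth homography is assumed to come with the data set, as in the Mikolajczyk benchmark.

```python
# Generic match-accuracy metric: the fraction of correspondences whose
# ground-truth reprojection error is below a pixel threshold.
# Illustrative sketch; names are not from the paper's implementation.

def project(H, pt):
    """Apply a 3x3 homography (nested lists) to a 2D point."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def match_accuracy(matches, H, threshold=10.0):
    """matches: list of ((x1, y1), (x2, y2)) pairs; H: ground-truth
    homography mapping image 1 to image 2. Returns the inlier fraction."""
    if not matches:
        return 0.0
    good = 0
    for p1, p2 in matches:
        qx, qy = project(H, p1)
        err = ((qx - p2[0]) ** 2 + (qy - p2[1]) ** 2) ** 0.5
        if err <= threshold:
            good += 1
    return good / len(matches)

# Identity homography with one perfect match and one far-off match.
H_id = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
ms = [((5.0, 5.0), (5.0, 5.0)), ((5.0, 5.0), (50.0, 50.0))]
print(match_accuracy(ms, H_id, threshold=10.0))  # 0.5
```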
Conclusion
The experimental results verify the efficiency and practicability of the proposed method. In the future, this method will be integrated into advanced image-processing tasks such as 3D reconstruction and super-resolution reconstruction.
Li B, Ming D L, Yan W W, et al. Image matching based on two-column histogram hashing and improved RANSAC[J]. IEEE Geoscience and Remote Sensing Letters, 2014, 11(8): 1433-1437. [DOI: 10.1109/LGRS.2013.2295115]
Barnes C, Shechtman E, Goldman D B, et al. The generalized PatchMatch correspondence algorithm[C]//Proceedings of European Conference on Computer Vision. Berlin: Springer-Verlag, 2010: 29-43. [DOI: 10.1007/978-3-642-15558-1_3]
Bian J W, Lin W Y, Matsushita Y, et al. GMS: Grid-based motion statistics for fast, ultra-robust feature correspondence[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE, 2017: 2828-2837. [DOI: 10.1109/CVPR.2017.302]
Lowe D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2):91-110.[DOI:10.1023/b:visi.0000029664.99615.94]
Bay H, Tuytelaars T, Van Gool L. SURF: Speeded up robust features[C]//Proceedings of European Conference on Computer Vision. Berlin: Springer, 2006: 404-417. [DOI: 10.1007/11744023_32]
Ma J Y, Zhao J, Tian J W, et al. Robust point matching via vector field consensus[J]. IEEE Transactions on Image Processing, 2014, 23(4):1706-1721.[DOI:10.1109/TIP.2014.2307478]
Yi K M, Trulls E, Lepetit V, et al. LIFT: Learned invariant feature transform[C]//Proceedings of European Conference on Computer Vision. Cham: Springer, 2016: 1-16. [DOI: 10.1007/978-3-319-46466-4_28]
Mikolajczyk K, Schmid C. A performance evaluation of local descriptors[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(10):1615-1630.[DOI:10.1109/TPAMI.2005.188]
Mikolajczyk K, Tuytelaars T, Schmid C, et al. A comparison of affine region detectors[J]. International Journal of Computer Vision, 2005, 65(1-2):43-72.[DOI:10.1007/s11263-005-3848-x]
Zhao Q S, Wu X Q, Bu W. Contactless palmprint verification based on SIFT and iterative RANSAC[C]//Proceedings of 2013 IEEE International Conference on Image Processing. Melbourne, VIC, Australia: IEEE, 2013: 4186-4189. [DOI: 10.1109/ICIP.2013.6738862]
Rublee E, Rabaud V, Konolige K, et al. ORB: An efficient alternative to SIFT or SURF[C]//Proceedings of 2011 International Conference on Computer Vision. Barcelona, Spain: IEEE, 2011: 2564-2571. [DOI: 10.1109/ICCV.2011.6126544]
Revaud J, Weinzaepfel P, Harchaoui Z, et al. DeepMatching: hierarchical deformable dense matching[J]. International Journal of Computer Vision, 2016, 120(3): 300-323. [DOI: 10.1007/s11263-016-0908-3]
Wang J X, Zhang J, Zhang X. A dense matching algorithm of close-range images constrained by iterative triangle network[J]. Journal of Signal Processing, 2018, 34(3): 347-356. [DOI: 10.16798/j.issn.1003-0530.2018.03.012]
Wang X J, Xing F, Liu F. Stereo matching of objects with same features based on Delaunay triangulation and affine constraint[J]. Acta Optica Sinica, 2016, 36(11): 1115044-1-1115044-8. [DOI: 10.3788/AOS201636.1115004]
Xu N, Xiao X Y, You H J, et al. A pansharpening method based on HCT and joint sparse model[J]. Acta Geodaetica et Cartographica Sinica, 2016, 45(4): 434-441. [DOI: 10.11947/j.AGCS.2016.20150372]
Zhu H, Song W D, Tan H, et al. Remote sensing images super resolution reconstruction based on multi-scale detail enhancement[J]. Acta Geodaetica et Cartographica Sinica, 2016, 45(9): 1081-1088. [DOI: 10.11947/j.AGCS.2016.20150451]
Zhu H, Song W D, Yang D, et al. Dense matching method of inserting point into the Delaunay triangulation for close-range image[J]. Science of Surveying and Mapping, 2016, 41(4): 19-23. [DOI: 10.16251/j.cnki.1009-2307.2016.04.005]