Rapid calculation and ranging algorithm based on binocular region parallax
2019, Vol. 24, No. 9, pp. 1537-1545
Received: 2018-12-18; Revised: 2019-04-04; Published in print: 2019-09-16
DOI: 10.11834/jig.180639
Objective
Binocular ranging is important for autonomous obstacle avoidance and visual reconnaissance of unmanned surface vehicles (USVs), but visual sensor imaging is easily affected by illumination conditions and motion blur. The stereo matching cost calculation based on the classical Census transform is time-consuming, and the obtained disparity is of poor accuracy, which degrades ranging accuracy. To improve ranging accuracy while keeping the algorithm fast, a fast stereo matching algorithm for binocular ranging is proposed.
Method
Building on the traditional Census transform, a new bit string generation method is proposed. Three equally spaced pixels are selected on each edge of the square support window of a matching point, giving eight pixels in total, and these eight pixels are compared pairwise to generate a one-byte bit string. The bit strings of a candidate point and the point to be matched in the left and right fields of view are XORed to obtain their Hamming distance, and the pixel with the smallest Hamming distance is taken as the matched pixel; the difference between the horizontal coordinates of the two pixels is the disparity. A region-based disparity calculation is adopted: after the same target region is identified in the left and right fields of view, the disparities within the region are extracted and filtered, and the average disparity is used to compute the distance to the target.
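A minimal Python sketch of the descriptor and matching cost described above, assuming a grayscale image stored as a 2-D NumPy array. The eight boundary points are the four corners and four edge midpoints of the support window; the cyclic pairing used for the eight comparisons is an illustrative assumption, since the abstract does not specify which pairs are compared.

import numpy as np

def census8_descriptor(img, y, x, half=3):
    # Three equally spaced points per edge of the (2*half+1) x (2*half+1) window;
    # corners are shared between edges, so eight unique boundary points remain.
    pts = [(-half, -half), (-half, 0), (-half, half),   # top edge
           (0, half), (half, half), (half, 0),          # right edge and bottom edge
           (half, -half), (0, -half)]                    # bottom-left corner and left edge
    vals = [int(img[y + dy, x + dx]) for dy, dx in pts]
    desc = 0
    for i in range(8):
        # Compare each boundary point with the next one around the ring (assumed pairing).
        desc = (desc << 1) | (1 if vals[i] > vals[(i + 1) % 8] else 0)
    return desc  # one-byte bit string

def hamming8(a, b):
    # Hamming distance between two one-byte descriptors via bitwise XOR.
    return bin(a ^ b).count("1")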
Result
Compared with the disparity calculation based on the traditional Census transform, the proposed algorithm has a clear speed advantage: its running time is stable at about 0.4 s, only about one fifth of that of the traditional Census transform algorithm. In running time comparisons on the teddy and cones image pairs from the Middlebury dataset, the improved Census-based algorithm is nearly 20 s faster than an existing Census-based matching algorithm. In practical binocular ranging experiments, the ranging error of the proposed algorithm stays within 5% over a range of 10-19 m; given the motion characteristics and obstacle avoidance requirements of unmanned surface vehicles, this ranging accuracy is sufficient for obstacle avoidance of low-speed vehicles.
Conclusion
The proposed matching algorithm based on the improved Census transform greatly increases the speed of stereo matching, and the extracted target disparity is used for ranging. The actual ranging results show that the algorithm meets the visual obstacle avoidance requirements of unmanned surface vehicles.
Objective
Image-based ranging is more covert than traditional ranging methods such as ultrasonic and radar ranging. Ranging based on binocular vision for reconnaissance and obstacle avoidance is therefore an important capability for unmanned surface vehicles (USVs). However, visual sensor imaging is easily affected by illumination changes and motion blur, the stereo matching cost calculation based on the classical Census transform is computationally expensive, and the resulting parallax accuracy is poor, which affects the efficiency and accuracy of ranging. A fast stereo matching and parallax computation algorithm based on an improved Census transform is proposed in this study to improve ranging accuracy while ensuring rapid ranging.
Method
A new bit string generation method for the Census transform is proposed. Three pixels are selected at equal intervals on each edge of the square support window of a matching point, giving eight pixels on the window boundary. An eight-bit string is generated by pairwise comparison of these eight pixels and is used to compute the matching cost between candidate points. The Hamming distance between two candidate points is then obtained by a bitwise XOR of their eight-bit strings taken from the left and right fields of view, and the pair of pixels with the smallest Hamming distance is regarded as a pair of matched points, from which the parallax follows directly. To reduce computational complexity, the average parallax of the target area in the reference and target images, rather than the parallax of the entire image, is calculated and used to obtain the target distance. For stereo ranging on USVs, the target always occupies a certain area in the two fields of view, the target areas are highly similar, and the difference between the target contours is small, so the contour can be used to identify the same target in the two views. Once the same target area is determined in the left and right fields of view, the parallaxes of all pixels in the target area are extracted, and the target distance is calculated from the resulting average parallax.
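The matching and region-averaging steps could be sketched as follows, reusing the census8_descriptor and hamming8 helpers from the earlier sketch; the winner-takes-all search range, the rectified-image assumption, and the boolean target mask are illustrative choices rather than details taken from the paper.

def disparity_at(left, right, y, x, max_disp=64, half=3):
    # Winner-takes-all match along the same row of rectified images: the right-image
    # pixel with the smallest Hamming distance to the left-image pixel gives the disparity.
    ref = census8_descriptor(left, y, x, half)
    best_d, best_cost = 0, 9  # maximum possible cost is 8 bits
    for d in range(min(max_disp, x - half) + 1):
        cost = hamming8(ref, census8_descriptor(right, y, x - d, half))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def region_average_disparity(left, right, mask, max_disp=64, half=3):
    # Average disparity over a boolean target-region mask (True inside the target).
    ys, xs = np.nonzero(mask)
    disps = [disparity_at(left, right, y, x, max_disp, half) for y, x in zip(ys, xs)]
    return float(np.mean(disps)) if disps else 0.0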
Result
The computation cost of matching based on the classical Census transform increases with the size of the matching window, whereas the computation cost of matching based on the improved Census transform is stable, so the proposed algorithm has an evident speed advantage when the matching window is large. In practical binocular ranging for USVs, the binocular images are first pre-processed, for example by de-noising and de-blurring. Fast stereo matching and parallax calculation based on the improved Census transform are then carried out. Finally, the target distance is obtained from the stereo parallax and the binocular imaging model. With the proposed algorithm, the ranging error is less than 5% in the range of 10-19 m, and the binocular imaging and ranging principle indicates that the error introduced by the rapid stereo matching and parallax calculation based on the improved Census transform is no greater than 5%.
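The last step is the standard pinhole binocular model, Z = f * B / d, where f is the focal length in pixels, B the baseline, and d the average parallax; the calibration values in the sketch below are placeholders, not the ones used in the paper.

def distance_from_disparity(mean_disp_px, focal_px, baseline_m):
    # Binocular imaging model: Z = f * B / d (in metres when the baseline is in metres).
    return focal_px * baseline_m / mean_disp_px

# Illustrative call with placeholder calibration values:
# distance_from_disparity(25.0, focal_px=1200.0, baseline_m=0.3) -> 14.4 m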
Conclusion
Experimental results show that the proposed matching algorithm based on the improved Census transform greatly improves the speed of stereo matching. In practical binocular ranging for USVs, the target area in the left and right fields of view is determined first, and the average parallax of the target is then calculated to obtain the target distance. The actual ranging results show that the distance error is less than 5% and that the proposed algorithm satisfies the requirements of target ranging and obstacle avoidance for USVs.