Online extrinsic camera calibration based on high-definition map matching on public roadway
2021, Vol. 26, No. 1: 208-217
Print publication date: 2021-01-16
Accepted: 2020-10-27
DOI: 10.11834/jig.200432
Wenlong Liao, Huaqing Zhao, Junchi Yan. Online extrinsic camera calibration based on high-definition map matching on public roadway[J]. Journal of Image and Graphics, 2021,26(1):208-217.
Objective
Extrinsic camera calibration is a key step in applications such as advanced driver-assistance systems (ADAS). Traditional extrinsic calibration methods usually rely on specific scenes and specific markers and cannot perform dynamic calibration in real time in the field. Some extrinsic calibration methods that incorporate SLAM (simultaneous localization and mapping) or VIO (visual inertial odometry) depend on point-feature matching, and their accuracy is often limited. For ADAS applications, this paper proposes an extrinsic self-calibration method based on matching camera observations against a high-definition map.
Method
First, lane lines in the images are detected and extracted by deep learning; after data filtering and post-processing, they serve as the input of the optimization problem. Second, lane-point association is solved by nearest-neighbor search, and a reprojection error is defined in the image plane. Finally, gradient descent iteratively solves for the optimal extrinsic camera matrix, minimizing the reprojection matching error in the image plane between the detected lane lines and the ground-truth map lane lines.
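The nearest-neighbor association and image-plane reprojection error described above can be sketched as follows. This is a minimal numpy illustration: the function names and the brute-force distance search are chosen for clarity and are not taken from the paper's implementation.

```python
import numpy as np

def associate_nearest(detected, projected):
    """For each detected lane pixel, find the nearest projected map point.

    detected:  (N, 2) array of lane pixels from the lane detector.
    projected: (M, 2) array of HD-map lane points projected into the image.
    Returns indices into `projected` and the per-point pixel distances.
    """
    # Pairwise distance matrix (N, M); fine at lane-point scale.
    d = np.linalg.norm(detected[:, None, :] - projected[None, :, :], axis=2)
    idx = d.argmin(axis=1)
    return idx, d[np.arange(len(detected)), idx]

def reprojection_error(detected, projected):
    """Mean pixel distance between detected lanes and their nearest matches."""
    _, dist = associate_nearest(detected, projected)
    return dist.mean()
```

A real pipeline would typically replace the brute-force search with a k-d tree and add the weighting terms described in the Method section.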
Result
Tests with a vehicle on public roads show that the proposed method converges to the correct extrinsic parameters after multiple iterations, with a rotation accuracy better than 0.2° and a translation accuracy better than 0.2 m. Compared with calibration methods based on vanishing points or VIO (with accuracies of 2.2° and 0.3 m), the proposed method has a clear advantage in accuracy. Moreover, when the camera extrinsics change dynamically, the proposed method quickly converges to the new extrinsic parameters.
Conclusion
The proposed method does not depend on specific scenes, supports real-time iterative optimization of the extrinsic parameters, and effectively improves their accuracy to a level that meets ADAS requirements.
Objective
Camera calibration is one of the key factors of perception in advanced driver-assistance systems (ADAS) and many other applications. Traditional camera calibration methods, and even some state-of-the-art calibration algorithms currently in wide use in factories, strongly rely on specific scenes and specific markers. Existing methods for calibrating the extrinsic parameters of a camera are inconvenient and inaccurate, and current algorithms have obvious disadvantages that might cause serious accidents, damage the vehicle, or threaten the safety of passengers. Theoretically, once calibrated, the extrinsic parameters of the camera, including the position and posture of the camera installation, should be fixed and stable. However, the extrinsic parameters of a camera change throughout the lifetime of a vehicle. Real-time dynamic calibration is useful when vehicles are transported or when cameras are removed for maintenance or replacement. Other extrinsic calibration methods solve the estimation with simultaneous localization and mapping (SLAM) or visual inertial odometry (VIO) technologies. These methods extract point features, match points with the same characteristics, and calculate the spatial transformation between frames from the matched point pairs. However, owing to the absence of texture information, such as in an indoor environment, the accuracy of the extrinsic parameters is not always satisfactory. A common failure is that the algorithm cannot obtain any features from the available frames, or the features obtained are not sufficient to calculate the position. To solve this problem and meet the requirements of ADAS, this paper proposes a self-calibration method based on aligning the lanes detected by the camera with a high-definition (HD) map.
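Aligning detected lanes with an HD map rests on projecting map points into the image plane through the camera extrinsics. A minimal pinhole-model sketch is given below; the variable names are illustrative assumptions, and the intrinsic matrix `K` is assumed known and fixed.

```python
import numpy as np

def project_to_image(points_world, R, t, K):
    """Project 3-D HD-map lane points into the image plane.

    points_world: (N, 3) lane points in the world/vehicle frame.
    R (3x3), t (3,): extrinsics mapping world coordinates to the camera frame.
    K (3x3): camera intrinsic matrix.
    Returns (N, 2) pixel coordinates.
    """
    p_cam = points_world @ R.T + t          # world -> camera frame
    p_hom = p_cam @ K.T                     # camera -> homogeneous pixels
    return p_hom[:, :2] / p_hom[:, 2:3]     # perspective divide
```

The reprojection loss is then the image-plane distance between these projected map points and the lane points returned by the detector.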
Method
Feature extraction is the first step of calibration. The most common approach is to acquire features from frames, calculate the gradient or other pixel-wise information, and select the pixels with the most significant values as the detected features. In this paper, we adopt a state-of-the-art deep learning algorithm to detect lane points in the images grabbed from the camera. Some components of the extrinsic parameters, including the longitudinal translation, are unobservable while the vehicle is moving; thus, a data filtering and post-processing method is proposed. Images are classified into three types: invalid frames, data frames, and key frames. The data filtering rule efficiently assigns each obtained frame to one of these types according to the information the frame carries. Next, the reprojection error (or loss) is defined in the image plane. The process consists of four steps: 1) The lanes from the HD map are projected into the image plane, and the nearest neighbor is associated with every detected lane point. This step is similar to feature matching, but it relies only on the distance to the nearest candidate match points. 2) The point-to-point distance and the normal vectorial angle are calculated, and different weights are assigned according to the image type. 3) The geometric constraints of the lanes in the image plane and the camera frame are solved to obtain an initial guess of the extrinsic parameters; this guess is often imprecise and is valid only when the lane is a straight line and the camera translation is known. 4) A gradient descent-based iterative optimization minimizes the reprojection error, yielding the optimal extrinsic parameters. We calibrate the camera extrinsics this way for several reasons. Gradient descent is used because the extrinsic parameters are assumed to change slowly during the lifetime of a vehicle, so optimizing them with gradient descent preserves the accuracy of the current estimate. Even when outliers occur, the system remains stable for a period of time rather than producing rapidly changing extrinsic parameters, which would be dangerous while the vehicle is in motion. Deep learning is used for lane detection because lanes look different under different road conditions; with conventional methods, losing some lane-point features is common. The deep learning method does not have this problem: given enough training data, lanes can be detected even in entirely different environments and in most cases of extreme weather.
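The gradient-descent refinement in step 4 can be sketched as follows. The 6-DoF parameter vector, the function names, and the central finite-difference gradient are illustrative assumptions standing in for the paper's exact formulation.

```python
import numpy as np

def refine_extrinsics(theta0, loss, lr=1e-3, iters=250, eps=1e-5):
    """Iteratively refine a 6-DoF extrinsic vector by gradient descent.

    theta0: initial guess [roll, pitch, yaw, tx, ty, tz].
    loss:   callable mapping an extrinsic vector to the scalar
            reprojection error (detected lanes vs. projected map lanes).
    """
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(iters):
        # Central finite-difference gradient of the loss, one axis at a time.
        grad = np.zeros_like(theta)
        for i in range(len(theta)):
            step = np.zeros_like(theta)
            step[i] = eps
            grad[i] = (loss(theta + step) - loss(theta - step)) / (2 * eps)
        theta -= lr * grad                  # descend toward lower loss
    return theta
```

Because the true extrinsics drift slowly, small learning-rate updates of this kind keep the estimate stable even when an occasional outlier frame produces a bad loss value.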
Result
Experiments on an open road show that the designed loss function is meaningful and convex. Within 250 iterations, the proposed method converges to the true extrinsic parameters, with a rotation accuracy of 0.2° and a translation accuracy of 0.03 m. Compared with a VIO-based method and another lane detection-based method, our approach is more accurate thanks to the HD-map information. A further experiment shows that the proposed method quickly converges to the new true value when the extrinsic parameters change dynamically.
Conclusion
With the use of lane detection, the proposed method does not depend on specific scenes or markers. By matching the detected lanes against the HD map through numerical optimization, the calibration can be performed in real time, and it improves the accuracy of the extrinsic parameters more than other methods. The accuracy of the proposed method meets the requirements of ADAS, showing great value for industrial applications.
Keywords: extrinsic parameter calibration; map alignment; lane; gradient descent; online calibration