Online extrinsic camera calibration based on high-definition map matching on public roadway

Liao Wenlong1,2, Zhao Huaqing2, Yan Junchi1(1.Shanghai Jiao Tong University, Shanghai 200240, China;2.Anhui COWAROBOT Co., Ltd., Wuhu 241010, China)

Abstract
Objective Extrinsic camera calibration is a key step in advanced driver-assistance systems (ADAS) and other applications. Traditional extrinsic calibration methods usually rely on specific scenes and dedicated markers and cannot perform dynamic calibration in real time and in the field. Some extrinsic calibration methods built on simultaneous localization and mapping (SLAM) or visual-inertial odometry (VIO) depend on point-feature matching, and their accuracy is often limited. For ADAS applications, this paper proposes a self-calibration method for the camera extrinsic parameters based on camera-to-map matching. Method First, lane lines are detected and extracted from the images with deep learning and, after data filtering and post-processing, serve as the input of the optimization problem. Second, lane point association is solved by a nearest-neighbor search, and a reprojection error is defined in the image plane. Finally, a gradient-descent method iteratively solves for the optimal extrinsic camera matrix that minimizes the reprojection matching error between the detected lanes and the ground-truth map lanes in the image plane. Result Tests on a vehicle driven on open roads show that the proposed method converges to the correct extrinsic parameters after a number of iterations, with a rotation accuracy better than 0.2° and a translation accuracy better than 0.2 m. Compared with calibration methods based on vanishing points or VIO (with accuracies of 2.2° and 0.3 m), the proposed method has a clear advantage in accuracy. Moreover, when the camera extrinsic parameters change dynamically, the proposed method quickly converges to the new values. Conclusion The proposed method does not depend on specific scenes, supports real-time iterative optimization of the extrinsic parameters, effectively improves their accuracy, and meets the accuracy requirements of ADAS.
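For illustration, the reprojection objective described in the Method section can be written compactly as follows (a minimal sketch in our own notation, not taken verbatim from the paper; the data filtering is omitted):

$$E(\mathbf{R},\mathbf{t})=\sum_i w_i\,\bigl\|\pi\bigl(\mathbf{K}(\mathbf{R}\mathbf{P}_i+\mathbf{t})\bigr)-\mathbf{p}_i\bigr\|^2,$$

where $\mathbf{K}$ is the intrinsic matrix, $(\mathbf{R},\mathbf{t})$ are the extrinsic rotation and translation being calibrated, $\mathbf{P}_i$ is an HD-map lane point, $\mathbf{p}_i$ is its nearest detected lane point in the image, $\pi(\cdot)$ denotes perspective division, and $w_i$ is a weight; gradient descent over $(\mathbf{R},\mathbf{t})$ minimizes $E$.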
Keywords
Online extrinsic camera calibration based on high-definition map matching on public roadway

Liao Wenlong1,2, Zhao Huaqing2, Yan Junchi1(1.Shanghai Jiao Tong University, Shanghai 200240, China;2.Anhui COWAROBOT Co., Ltd., Wuhu 241010, China)

Abstract
Objective Extrinsic camera calibration is one of the key factors for perception in advanced driver-assistance systems (ADAS) and many other applications. Traditional camera calibration methods, and even some state-of-the-art calibration algorithms currently widely used in factories, strongly rely on specific scenes and specific markers. Existing methods for calibrating the extrinsic parameters of a camera are therefore inconvenient and inaccurate, and their shortcomings may cause serious accidents, damage the vehicle, or threaten the safety of passengers. In theory, once calibrated, the extrinsic parameters of a camera, that is, the position and orientation of the camera installation, should remain fixed and stable. In practice, however, they change throughout the lifetime of a vehicle, so real-time dynamic calibration is useful when vehicles are transported or when cameras are removed for maintenance or replacement. Other extrinsic calibration methods solve the estimation with simultaneous localization and mapping (SLAM) or visual-inertial odometry (VIO). These methods extract point features, match points with similar descriptors, and compute the spatial transformation between frames from the matched point pairs. However, when texture information is absent, for example in indoor environments, the accuracy of the estimated extrinsic parameters is not always satisfactory: a common failure case is that the algorithm cannot extract any features from the current frames, or the extracted features are too few to compute the pose. To solve this problem and meet the requirements of ADAS, this paper proposes a self-calibration method based on aligning the lanes detected by the camera with a high-definition (HD) map. Method Feature extraction is the first step of calibration. The most common approach is to compute the gradient or other hand-crafted responses at every pixel of a frame and select the pixels with the most significant responses as features. In this paper, we instead adopt a state-of-the-art deep learning algorithm to detect lane points in the images captured by the camera. Some components of the extrinsic parameters, including the longitudinal translation, are unobservable while the vehicle is moving; thus, a data filtering and post-processing method is proposed. Images are classified into three types, namely invalid frames, data frames, and key frames, and the filtering rule efficiently assigns each incoming frame to one of these types according to the information it carries. Next, the reprojection error (or loss) is defined in the image plane. The process consists of four steps: 1) The lanes of the HD map are projected onto the image plane, and each detected lane point is associated with its nearest projected neighbor. This step is similar to feature matching, but it relies only on the distance to the nearest candidate match. 2) The point-to-point distances and the angles between normal vectors are computed, and different weights are assigned according to the frame type. 3) The geometric constraints between the lanes in the image plane and the camera frame are solved to obtain an initial guess of the extrinsic parameters; this guess is often imprecise and is valid only when the lane is a straight line and the camera translation is known. 4) A gradient descent-based iterative optimization minimizes the reprojection error and thereby determines the optimal extrinsic parameters. This approach is chosen for several reasons. Gradient descent is used because the extrinsic parameters are assumed to change slowly over the lifetime of a vehicle, so iterative refinement preserves the accuracy of the current estimate; even when outliers occur, the system remains stable for a period of time rather than producing rapidly changing extrinsic parameters, which would be dangerous while the vehicle is in motion. Deep learning is used for lane detection because lanes look different under different road conditions; conventional methods commonly lose some lane-point features, whereas a deep learning detector with sufficient training data can handle lanes even in very different environments and in most cases of extreme weather. Result Experiments on an open road show that the designed loss function is meaningful and convex. With 250 iterations, the proposed method converges to the true extrinsic parameters, with a rotation accuracy of 0.2° and a translation accuracy of 0.03 m. Compared with a VIO-based method and another lane detection-based method, our approach is more accurate thanks to the HD map information. A further experiment shows that the proposed method quickly converges to the new true values when the extrinsic parameters change dynamically. Conclusion Owing to the use of lane detection, the proposed method does not depend on specific scenarios or markers. By matching the detected lanes against the HD map through numerical optimization, the calibration can be performed in real time and improves the accuracy of the extrinsic parameters more significantly than other methods. The accuracy of the proposed method meets the requirements of ADAS, showing great practical value for industry.
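As a rough illustration of steps 1) to 4), the following minimal Python sketch (our own simplification, not the authors' implementation) projects HD-map lane points into the image with candidate extrinsics, associates each projection with its nearest detected lane point by brute force, and refines the six extrinsic degrees of freedom by gradient descent with finite-difference gradients; the function names, the single scalar weight, and the learning rate are illustrative assumptions.

import numpy as np

def euler_to_R(rx, ry, rz):
    """Rotation matrix from roll/pitch/yaw in radians (Z*Y*X convention, assumed)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(K, R, t, P_map):
    """Project HD-map lane points P_map (N,3) into the image with extrinsics (R, t)."""
    Pc = (R @ P_map.T).T + t            # vehicle/map frame -> camera frame
    uv = (K @ Pc.T).T
    return uv[:, :2] / uv[:, 2:3]       # perspective division

def reprojection_loss(theta, K, P_map, p_det, w=1.0):
    """Weighted nearest-neighbour reprojection error in the image plane."""
    R = euler_to_R(*theta[:3])
    t = theta[3:]
    proj = project(K, R, t, P_map)
    # brute-force association: each projected map point to its nearest detected lane point
    d2 = ((proj[:, None, :] - p_det[None, :, :]) ** 2).sum(axis=-1)
    return w * d2.min(axis=1).mean()

def calibrate(theta0, K, P_map, p_det, lr=1e-4, iters=250, eps=1e-6):
    """Refine the 6-DoF extrinsics by gradient descent (finite-difference gradient, sketch only)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(theta)
        for j in range(theta.size):
            step = np.zeros_like(theta)
            step[j] = eps
            grad[j] = (reprojection_loss(theta + step, K, P_map, p_det)
                       - reprojection_loss(theta - step, K, P_map, p_det)) / (2 * eps)
        theta -= lr * grad
    return theta

In the actual method, the weights follow the invalid/data/key frame classification, and the association and gradients are evaluated per frame; the sketch only conveys the overall structure of the optimization loop.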
Keywords
