
发布时间: 2022-02-16
DOI: 10.11834/jig.210547
2022 | Volume 27 | Number 2




    综述    





多源融合SLAM的现状与挑战
王金科, 左星星, 赵祥瑞, 吕佳俊, 刘勇
浙江大学, 杭州 310027

摘要

同时定位与地图构建(simultaneous localization and mapping,SLAM)技术在过去几十年中取得了惊人的进步,并在现实生活中实现了大规模的应用。由于精度和鲁棒性的不足,以及场景的复杂性,使用单一传感器(如相机、激光雷达)的SLAM系统往往无法适应目标需求,故研究者们逐步探索并改进多源融合的SLAM解决方案。本文从3个层面回顾总结该领域的现有方法:1)多传感器融合(由两种及以上传感器组成的混合系统,如相机、激光雷达和惯性测量单元,可分为松耦合、紧耦合);2)多特征基元融合(点、线、面、其他高维几何特征等与直接法相结合);3)多维度信息融合(几何、语义、物理信息和深度神经网络的推理信息等相融合)。惯性测量单元和视觉、激光雷达的融合可以解决视觉里程计的漂移和尺度丢失问题,提高系统在非结构化或退化场景中的鲁棒性。此外,不同几何特征基元的融合,可以大大增加有效约束的数量,并可为自主导航任务提供更多的有用信息。另外,数据驱动下的基于深度学习的策略为SLAM系统开辟了新的道路。监督学习、无监督学习和混合监督学习等逐渐应用于SLAM系统的各个模块,如相对位姿估计、地图表示、闭环检测和后端优化等。学习方法与传统方法的结合将是提升SLAM系统性能的有效途径。本文分别对上述多源融合SLAM方法进行分析归纳,并指出其面临的挑战及未来发展方向。

关键词

同时定位与地图构建(SLAM); 多源融合; 多传感器融合; 多特征基元融合; 多维度信息融合

Review of multi-source fusion SLAM: current status and challenges
Wang Jinke, Zuo Xingxing, Zhao Xiangrui, Lyu Jiajun, Liu Yong
Zhejiang University, Hangzhou 310027, China
Supported by: National Natural Science Foundation of China (61836015)

Abstract

Simultaneous localization and mapping (SLAM) estimates the motion state of a sensor-equipped robot and reconstructs a model (map) of its environment at the same time, and is widely used in mobile robot applications. The SLAM community has pushed the technique into various real-life applications, such as virtual reality, augmented reality, autonomous driving and service robots. In complicated scenarios, however, SLAM systems relying on a single sensor such as a camera or light detection and ranging (LiDAR) often fail to meet the requirements of the targeted application due to insufficient accuracy and robustness. Researchers have therefore gradually improved SLAM solutions based on multiple sensors, multiple feature primitives, and the integration of multi-dimensional information. This review surveys current methods in the multi-source fusion SLAM realm at three levels: multi-sensor fusion (hybrid systems with two or more kinds of sensors such as camera, LiDAR and inertial measurement unit (IMU), whose combination methods can be divided into two categories, loosely-coupled and tightly-coupled), multi-feature-primitive fusion (points, lines, planes, other high-dimensional geometric features, and featureless direct methods), and multi-dimensional information fusion (geometric information, semantic information, physical information, and information inferred by deep neural networks). The challenges and future research directions of multi-source fusion SLAM are discussed as well. Multi-source fusion systems can achieve accurate and robust state estimation and mapping, which meets the requirements of a wider variety of applications. For instance, the fusion of visual and inertial sensors can alleviate the drift and scale-missing issues of visual odometry, while the fusion of LiDAR and an inertial measurement unit can improve the robustness of the system, especially in unstructured or degraded scenes. The fusion of other sensors, such as sonar, radar and GPS (global positioning system), extends the applicability further. In addition, fusing diverse geometric feature primitives such as feature points, lines, curves, planes, curved surfaces, cubes, and featureless direct methods can greatly increase the number of valid constraints, which is of great importance for state estimation, and the reconstructed environmental map with multiple feature primitives is informative for autonomous navigation tasks. Furthermore, data-driven deep-learning-based methods, combined with probabilistic model-based methods, pave a new path to overcome the challenges of classical SLAM systems. Learning-based methods (supervised learning, unsupervised learning, and hybrid supervised learning) are gradually applied to various modules of SLAM systems, including relative pose regression, map representation, loop closure detection, and unrolled back-end optimization. Learning-based methods will benefit the performance of SLAM further as more research fills the gap between networks and traditional methods. This review is organized as follows: 1) the fundamental formulation of multi-sensor fusion and current multi-sensor fusion methods are analyzed; 2) multi-feature-primitive fusion and multi-dimensional information fusion are reviewed; 3) the current difficulties and challenges of multi-source fusion SLAM are discussed; 4) a summary is given at the end.

Key words

simultaneous localization and mapping (SLAM); multi-source fusion; multi-sensor fusion; multi-feature fusion; multi-dimensional information fusion

0 引言

同时定位与地图构建(simultaneous localization and mapping, SLAM)技术(Smith和Cheeseman,1986)经过几十年的发展,已取得丰硕的研究成果。它所关注的问题是载有传感器的机器人如何在未知环境中定位并构建出环境地图,是机器人估计自身状态和感知外部环境的关键技术。从一些综述文章(Durrant-Whyte和Bailey,2006;Dissanayake等,2011;Cadena等,2016)可知,SLAM的发展经历了3个阶段:第1阶段为早年的“经典”时期(Durrant-Whyte和Bailey,2006),主要完成了SLAM的概率解释,如基于扩展卡尔曼滤波器、粒子滤波器和极大似然估计的方法(Thrun等,2005);第2阶段为“算法分析”时期(Dissanayake等,2011),主要研究SLAM的基本性质,如收敛性、一致性、可观测性和稀疏性;第3阶段为当前的“鲁棒感知”时期(Cadena等,2016),主要解决复杂环境下的适应性问题。面对日趋复杂的应用场景,设计SLAM算法应统筹兼顾,有针对性地对算力、精度和鲁棒性等进行取舍(左星星,2021)。

在SLAM发展过程中,各种传感器承担着至关重要的角色,为定位与建图算法提供全局或局部的测量信息。全球定位系统(global positioning system, GPS)广泛用于户外环境中提供载体在全局坐标系下的位置测量(Burschka和Hager,2004;Chen等,2020a)。然而,在许多环境中, 如高楼脚下、室内、隧道和海底等,GPS测量是不可靠或不可用的;惯性测量单元(inertial measurement unit,IMU)可感知机器人自身运动,虽然目前基于微机电系统的高性能、低成本和轻量化的IMU已经推向市场,但IMU测量由于存在零偏不确定性和累计误差(Huang,2019),无法长时间独立使用;相机在SLAM相关领域中的应用已经非常广泛,主要有单目相机(Davison,2003)、双目相机(Engel等,2015)和深度相机(Sturm等,2012)等。龙霄潇等人(2021)对视觉定位与地图构建进行了充分详细的调研。虽然已经有很多解决方案,但仅基于视觉的SLAM系统在动态环境、显著特征过多或过少以及存在部分或全部遮挡的条件下(如图 1所示)工作时会失败,且受天气、光照影响较大(Forster等,2017b);基于激光(Hess等,2016)构建场景是传统且可靠的方法,它能够提供机器人本体与周围环境障碍物间的距离信息且比较准确,误差模型简单,对光照不敏感,点云的处理比较容易且理论研究也相对成熟,落地产品更丰富,但其重定位能力较差,激光SLAM在跟踪丢失后很难重新回到工作状态,不擅长在相似的几何环境中工作(Levinson和Thrun,2010),如长直走廊,对动态变化较大的场景适应性也较弱。

图 1 具有挑战性的应用场景
Fig. 1 Challenging application scenarios
((a)seasonal environment(Olid et al., 2018); (b)occluded scene(Bescos et al., 2018); (c)environment with too few salient features)

基于上述实际SLAM应用中的挑战与难点,现有的技术方法试图从多种传感器、多种特征和多维度信息融合3个层次来进行改善和解决,因此本文中将此3类方法统称为多源融合SLAM方法,具体如下:

1) 针对用单传感器构建SLAM系统的局限性,研究者们首先提出将多种传感器组合,利用不同传感器的优势克服其他传感器的缺陷来提高定位建图算法在不同场景中的适用性和对位姿估计的准确性,涌现出视觉惯导(Qin等,2018)、激光惯导(Geneva等,2018)和激光视觉惯导(Shan等,2021)等多传感器融合系统。

2) 在对图像进行特征提取时,不仅考虑点特征,而且将线特征、平面特征和像素灰度信息等进行处理(Yang等,2019a, b),同时对激光雷达点云信息进行线特征点、面特征点和体素特征点的合理利用(Zhou和Tuzel,2018),从而达到多特征基元的融合。

3) 语义信息(Cadena等,2016)逐渐应用到SLAM中,它是一种长期稳定的特征信息,不易受环境因素影响,而传统SLAM方法主要使用环境的局部图像几何纹理信息,其容易受到光照、季节和天气等因素影响,故而语义和几何的多维度信息融合成为目前的研究热点之一(Rosinol等,2020)。

本文回顾分析目前多源融合SLAM方法的研究成果,从3个层次(如图 2所示)分别进行阐述。

图 2 现有多源融合SLAM分类与对应本文章节
Fig. 2 A taxonomy of existing works on multi-source fusion for simultaneous localization and mapping and corresponding chapter

1 多传感器融合

多传感器融合SLAM经过一定时间的发展,已经逐步形成了视觉惯性系统、激光惯性系统和激光视觉惯性系统等多种融合方式,本节基于优化问题定义多传感器融合系统并给出惯性传感器的基本动力学模型,对常用的传感器融合方式进行阐述。

1.1 多传感器融合问题与惯性传感器动力学模型

1.1.1 多传感器融合系统

用于估计机器人状态的多传感器融合系统可分为感知自身运动信息的本体传感器(如编码器、磁力计、轮速计和惯性测量单元等,其中惯性测量单元使用最广泛)和感知外部环境的传感器(如相机、激光雷达和毫米波雷达等)。对于单机器人多传感器状态估计问题,通过使用$n$个独立的传感器$S_{1}, S_{2}, …, S_{n}$估计机器人在$m$个离散时间的状态$X_{1}, X_{2}, …, X_{m}$。将$n$个传感器的测量输出表示为(van Dinh和Kim,2020)

$ \mathit{\boldsymbol{Z}} = \left[ {\begin{array}{*{20}{c}} {{{\mathit{\boldsymbol{\overline z}} }_1}}\\ {{{\mathit{\boldsymbol{\overline z}} }_2}}\\ \vdots \\ {{{\mathit{\boldsymbol{\overline z}} }_n}} \end{array}} \right] = {\left[ {\begin{array}{*{20}{c}} {{h_1}\left({{X_1}} \right)}&{{h_2}\left({{X_1}} \right)}& \cdots &{{h_n}\left({{X_1}} \right)}\\ {{h_1}\left({{X_2}} \right)}&{{h_2}\left({{X_2}} \right)}& \cdots &{{h_n}\left({{X_2}} \right)}\\ \vdots & \vdots &{}& \vdots \\ {{h_1}\left({{X_m}} \right)}&{{h_2}\left({{X_m}} \right)}& \cdots &{{h_n}\left({{X_m}} \right)} \end{array}} \right]^{\rm{T}}} $ (1)

式中,${\boldsymbol{{\bar {z}}}}_{i}=[h_{i}(X_{1}), h_{i}(X_{2}), …, h_{i}(X_{m})]$表示第$i$个传感器在$m$个离散时刻的测量模型输出,${\boldsymbol{Z}}$为所有传感器的整体测量,其公式可以定义为

$ \mathit{\boldsymbol{Z}} = \mathit{\boldsymbol{ \boldsymbol{\varPsi} }}\left({{X_1}, {X_2}, \cdots, {X_m}} \right) \in {{\bf{R}}^{n \times m}} $ (2)

式中,矩阵${\boldsymbol{\varPsi }}(·)∈{\bf{R}}^{n×m}$为测量模型,机器人的动力学模型为

$ \frac{{\partial X}}{{\partial t}} = \mathit{\Xi }(\mathit{\boldsymbol{X}}, \mathit{\boldsymbol{U}}, \mathit{\boldsymbol{W}}) $ (3)

式中,$\mathit{\Xi }(·)$为动力学模型的抽象表示,矩阵${\boldsymbol{X}}$包含机器人在整个运行过程中的状态,矩阵${\boldsymbol{U}}$表示控制输入,矩阵${\boldsymbol{W}}$表示噪声。多传感器融合系统定义为优化问题的最大似然估计,即

$ \begin{array}{*{20}{c}} {{\mathit{\boldsymbol{X}}^*} = \arg \mathop {\max }\limits_\mathit{\boldsymbol{X}} p(\mathit{\boldsymbol{X}}\mid \mathit{\boldsymbol{Z}}) = \arg \mathop {\max }\limits_\mathit{\boldsymbol{X}} p(\mathit{\boldsymbol{Z}}\mid \mathit{\boldsymbol{X}})p(\mathit{\boldsymbol{X}}) = }\\ {\arg \mathop {\min }\limits_\mathit{\boldsymbol{X}} \mathit{\boldsymbol{E}}\left({(\mathit{\boldsymbol{X}} - \mathit{\boldsymbol{\overline X}}){{(\mathit{\boldsymbol{X}} - \mathit{\boldsymbol{\overline X}})}^{\rm{T}}}} \right) = }\\ {\arg \mathop {\min }\limits_\mathit{\boldsymbol{X}} \left\| {\mathit{\boldsymbol{X}} - \mathit{\boldsymbol{\overline X}} } \right\|_\mathit{\boldsymbol{ \boldsymbol{\varOmega} }}^2\;\;\;{\kern 1pt} {\rm{ s}}{\rm{. t}}{\rm{. }}\;\;\;{\kern 1pt} \mathit{\boldsymbol{Z}} = \mathit{\boldsymbol{ \boldsymbol{\varPsi} }}\left({{X_1}, {X_2}, \cdots, {X_m}} \right)} \end{array} $ (4)

式中,${\boldsymbol{X}}^*$为优化问题式(4)的最优解,${\boldsymbol{{\bar {X}}}}$是由式(2)的测量模型得到的状态估计值,$E(·)$表示期望,${\boldsymbol{\varOmega }}$为对应的协方差矩阵。
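
为直观理解式(4)中的最大似然融合,下面给出一个极简的Python示例(仅作示意,并非任何文献方法的实现):假设两个噪声水平不同的传感器观测同一标量状态,在高斯噪声假设下,其最大似然估计等价于按各自协方差加权的最小二乘,可视为式(4)的一个标量特例。其中的噪声标准差与观测数据均为假设的仿真值。

```python
import numpy as np

# 两个传感器对同一标量状态 x 的观测(假设的仿真数据)
true_x = 2.0
sigma1, sigma2 = 0.1, 0.5          # 两个传感器的噪声标准差(假设值)
rng = np.random.default_rng(0)
z1 = true_x + rng.normal(0.0, sigma1, size=20)   # 传感器1的测量
z2 = true_x + rng.normal(0.0, sigma2, size=20)   # 传感器2的测量

# 高斯噪声下的最大似然估计等价于按 1/sigma^2 加权的最小二乘,
# 对应式(4)中 argmin ||X - X_bar||^2_Omega 的标量特例
w1, w2 = 1.0 / sigma1**2, 1.0 / sigma2**2
x_hat = (w1 * z1.sum() + w2 * z2.sum()) / (w1 * len(z1) + w2 * len(z2))

print(f"fused estimate: {x_hat:.4f}, ground truth: {true_x}")
```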

1.1.2 惯性传感器动力学模型

典型的惯性测量单元提供机体坐标系${I}$下的三轴线性加速度${\boldsymbol{a}}_{m}$和三轴角速度${\boldsymbol{ω}}_{m}$,测量输出为(Trawny和Roumeliotis,2005)

$ \begin{array}{*{20}{c}} {{\mathit{\boldsymbol{a}}_m} = _G^I\mathit{\boldsymbol{R}}\left({^G\mathit{\boldsymbol{a}}{ - ^G}\mathit{\boldsymbol{g}}} \right) + {\mathit{\boldsymbol{b}}_a} + {\mathit{\boldsymbol{n}}_a}}\\ {{\mathit{\boldsymbol{\omega }}_m}{ = ^I}\mathit{\boldsymbol{\omega }} + {\mathit{\boldsymbol{b}}_g} + {\mathit{\boldsymbol{n}}_g}} \end{array} $ (5)

式中,${\boldsymbol{b}}_{a}, {\boldsymbol{b}}_{g}$分别为加速度计和陀螺仪的偏置,${\boldsymbol{n}}_{a}, {\boldsymbol{n}}_{g}$分别为加速度计和陀螺仪的高斯白噪声,$^I{\boldsymbol{ω}}=[ω_{1}\;ω_{2}\;ω_{3}]^{\rm{T}}$是惯性测量单元的旋转角速度,$^{G}{\boldsymbol{a}}$是全局坐标系$\{G\}$下的惯性测量单元的加速度,$^{G}{\boldsymbol{g}}$是全局坐标系下的重力加速度,结合式(5)可得惯性测量单元的连续时间运动学模型为(Chatfield,1997)

$ \begin{array}{*{20}{c}} {_G^I\boldsymbol{\dot q} = \frac{1}{2}\left[ {\begin{array}{*{20}{c}} {^I\boldsymbol{\omega }}\\ 0 \end{array}} \right] \otimes _G^I\boldsymbol{q} = \frac{1}{2}\boldsymbol{\varOmega }\left( {^I\boldsymbol{\omega }} \right)\;_G^I\boldsymbol{q}}\\ {^G\boldsymbol{\dot p}{ = ^G}\boldsymbol{v},{\;^G}\boldsymbol{\dot v}{ = ^G}\boldsymbol{a}}\\ {{{\boldsymbol{\dot b}}_a} = {\boldsymbol{n}_{wa}},{{\boldsymbol{\dot b}}_g} = {\boldsymbol{n}_{wg}}} \end{array} $ (6)

式中,$\mathit{\boldsymbol{ \boldsymbol{\varOmega} }}(\mathit{\boldsymbol{\omega }}) = \left({\begin{array}{*{20}{c}} { - \left\lfloor {{\mathit{\boldsymbol{\omega }}_ \times }} \right\rfloor }&{\mathit{\boldsymbol{\omega }}}\\ { - {\mathit{\boldsymbol{\omega }}^{\rm{T}}}}&0 \end{array}} \right)$,$\left\lfloor {{\mathit{\boldsymbol{\omega }}_ \times }} \right\rfloor $为${\boldsymbol{ω}}$的反对称矩阵。惯性测量单元的状态向量${\boldsymbol{X}}_{I}=[^{I}_{G}{\boldsymbol{q}}^{\rm{T}}\;{\boldsymbol{b}}^{\rm{T}}_{g}\;^{G}{\boldsymbol{v}}^{\rm{T}}\;{\boldsymbol{b}}^{\rm{T}}_{a}\;^{G}{\boldsymbol{p}}^{\rm{T}}]^{\rm{T}}$,$^{I}_{G}{\boldsymbol{q}}$为旋转的单位四元数表示,$^{G}{\boldsymbol{v}}$为速度向量,$^{G}{\boldsymbol{p}}$为位置向量;连续时间状态误差传播方程为(Mourikis等,2009)

$ {\mathit{\boldsymbol{\dot {\tilde X}}}_I} = \mathit{\boldsymbol{F}}{\mathit{\boldsymbol{\widetilde X}}_I} + \mathit{\boldsymbol{G}}{n_I} $ (7)

式中,${\boldsymbol{\tilde X}}$为状态误差,${\boldsymbol{F}}$为状态误差转移矩阵,${\boldsymbol{G}}$为输入噪声矩阵(Trawny和Roumeliotis,2005)。
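
为说明式(5)(6)对应的名义状态递推过程,下面给出一个离散化IMU传播的简化Python示意(欧拉积分,忽略噪声与偏置随机游走;为避免四元数约定差异,代码中用机体到全局的旋转矩阵$\boldsymbol{R}=({^I_G}\boldsymbol{R})^{\rm{T}}$表示姿态;步长、数据与函数名均为本文为说明而假设的),仅作示意而非任何具体系统的实现。

```python
import numpy as np

def skew(w):
    """向量的反对称矩阵 ⌊w×⌋"""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def so3_exp(phi):
    """旋转向量到旋转矩阵的指数映射(Rodrigues公式)"""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3) + skew(phi)
    a = phi / theta
    return (np.cos(theta) * np.eye(3)
            + (1 - np.cos(theta)) * np.outer(a, a)
            + np.sin(theta) * skew(a))

def propagate(R, v, p, b_g, b_a, w_m, a_m, dt, g=np.array([0.0, 0.0, -9.81])):
    """单步名义状态递推: R 为机体到全局的旋转, v/p 为全局系下的速度与位置"""
    w = w_m - b_g                      # 去除陀螺零偏
    a = R @ (a_m - b_a) + g            # 比力转换到全局系并补偿重力
    R_new = R @ so3_exp(w * dt)        # 姿态积分
    v_new = v + a * dt                 # 速度积分
    p_new = p + v * dt + 0.5 * a * dt**2
    return R_new, v_new, p_new

# 假设的静止IMU数据: 陀螺输出为零, 加速度计测得与重力反向的比力
R, v, p = np.eye(3), np.zeros(3), np.zeros(3)
for _ in range(200):                   # 2 s, 采样率 100 Hz(假设)
    R, v, p = propagate(R, v, p, np.zeros(3), np.zeros(3),
                        np.zeros(3), np.array([0.0, 0.0, 9.81]), dt=0.01)
print("drift of a static IMU:", p)     # 理想无噪声情况下应接近零
```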

1.2 视觉惯性系统

Titterton和Weston(2005)指出,视觉惯性系统的核心是如何进行更好的状态估计,如何最佳地将IMU测量值和相机图像信息进行融合,为传感器安装平台提供最优的自身运动信息与环境信息(Huang,2019),典型的视觉惯性系统如图 3所示。IMU和相机有两种融合方式:松耦合和紧耦合。松耦合把IMU测量信息和相机图像信息当做两个相对独立的模块分别进行处理,然后再把二者的估计结果一起进行融合或优化,可能会导致精度损失;而紧耦合直接把相机的图像信息和IMU测量信息提供的约束放在一个估计器或优化器中进行求解,一般来说紧耦合精度更高,但计算量也更大。

图 3 典型视觉惯性SLAM系统框架(Cadena等,2016; Forster等,2017a)
Fig. 3 A typical framework of visual-inertial SLAM (Cadena et al., 2016; Forster et al., 2017a)

在扩展卡尔曼滤波器(extended Kalman filter,EKF)的框架下,Konolige等人(2010)实现了相机与IMU的松耦合,状态预测步骤用视觉里程计得到的旋转量来完成,状态更新步骤则用IMU提供的位姿测量来实现。Tardif等人(2010)考虑了更多的约束,将IMU与视觉里程计历史中多个时刻的状态加入状态向量中进行预测更新,利用了一个延迟的卡尔曼滤波器框架,但该模型复杂度较高。Weiss和Siegwart(2011)引入了一个黑箱模型来表示尺度不确定的视觉里程计,将相机位姿估计的输出作为测量值,与带有尺度信息的IMU位姿估计使用EKF框架进行松耦合。Lynen等人(2013)同样以EKF为滤波框架,提出了融合视觉里程计相对位姿与IMU测量的方法。
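
以一维位置状态为例,下面的Python示意给出松耦合滤波的基本流程(仅作示意,与上述任何具体系统的实现无关):IMU积分增量用于预测,视觉里程计输出的位姿作为观测进行卡尔曼更新,其中的噪声方差等参数均为假设值。

```python
import numpy as np

# 一维松耦合示意: 状态为位置 x, IMU 积分作预测, 视觉里程计(VO)位姿作观测
x_hat, P = 0.0, 1.0          # 初始状态与协方差(假设值)
Q, R = 0.05, 0.2             # 过程噪声(IMU积分)与观测噪声(VO)方差(假设值)

rng = np.random.default_rng(1)
true_x = 0.0
for k in range(50):
    # 真值与传感器仿真: 载体以 0.1 m/步 匀速前进
    true_x += 0.1
    imu_delta = 0.1 + rng.normal(0, np.sqrt(Q))   # IMU 积分得到的位移增量
    z_vo = true_x + rng.normal(0, np.sqrt(R))     # VO 直接给出的位置观测

    # 预测步: 用 IMU 增量外推状态
    x_hat, P = x_hat + imu_delta, P + Q
    # 更新步: 用 VO 观测修正(标准卡尔曼增益)
    K = P / (P + R)
    x_hat, P = x_hat + K * (z_vo - x_hat), (1 - K) * P

print(f"estimate: {x_hat:.3f}, truth: {true_x:.3f}")
```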

多状态约束卡尔曼滤波器(multi-state constraint Kalman filter, MSCKF)(Mourikis和Roumeliotis,2007)是基于EKF提出的一种紧耦合框架,使用IMU进行状态预测,将当前时刻的IMU速度、测量偏差等状态和滑动窗口中的多时刻相机位姿一并放到状态向量中,对视觉惯性里程计进行6自由度的运动估计,并提出了针对视觉特征的零空间操作,使视觉特征点不再作为被估计的状态量。Leutenegger等人(2015)提出的OKVIS(open keyframe-based visual-inertial SLAM)虽然也用滑动窗口的方法来构建视觉惯性系统,但用的是因子图优化的方式求解相机与IMU的约束,而且会及时边缘化旧关键帧与特征点来保证Hessian矩阵的稀疏性,从而达到限制问题求解规模的效果。Qin等人(2018)提出的VINS-MONO(monocular visual-inertial system)使用关键帧图像间IMU的预积分约束(Forster等,2017a)和视觉特征点的观测约束构建因子图优化问题,并在后端优化中加入闭环约束,在纠正视觉惯性里程计漂移方面取得较好的效果。Liu等人(2018)利用相关边缘化提出了ICE-BA(incremental, consistent and efficient bundle adjustment),具有更高的全局一致性与计算效率。
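
VINS-MONO等系统使用的IMU预积分(Forster等,2017a)可以理解为:把两关键帧之间的IMU测量累积成与起始全局位姿无关的相对运动量,作为因子图中的一个约束。下面是一个省略协方差传播与偏置修正雅可比的简化Python示意(欧拉积分,函数名与数据均为本文为说明而假设的),仅用于说明预积分量的递推方式。

```python
import numpy as np
from scipy.spatial.transform import Rotation

def preintegrate(gyro, accel, dt):
    """把两关键帧之间的IMU测量累积为相对运动量 (dR, dv, dp)。
    dR/dv/dp 定义在第 i 帧机体系下, 与第 i 帧的全局位姿无关;
    重力与偏置修正在组装残差时再补偿, 此处省略协方差与雅可比的传播。"""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        dp = dp + dv * dt + 0.5 * dR @ a * dt**2
        dv = dv + dR @ a * dt
        dR = dR @ Rotation.from_rotvec(w * dt).as_matrix()
    return dR, dv, dp

# 假设的一段IMU数据: 绕 z 轴匀速旋转并带恒定比力
gyro = [np.array([0.0, 0.0, 0.5])] * 100      # rad/s, 100 个采样
accel = [np.array([0.0, 0.0, 9.81])] * 100    # m/s^2(含重力的比力)
dR, dv, dp = preintegrate(gyro, accel, dt=0.005)
print("relative rotation (deg):", np.degrees(Rotation.from_matrix(dR).as_rotvec()))
```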

此外,基于深度学习的视觉惯性系统也逐步发展起来。Liu等人(2019)提出InertialNet,训练端到端模型来推导图像序列和IMU信息之间的联系,预测相机旋转。Shamwell等人(2020)提出无需IMU内参或传感器外参的无监督深度神经网络方法,通过在线纠错模块解决定位问题,这些模块经过训练可以纠正视觉辅助定位网络的错误。Kim等人(2021)为不确定性建模引入了无监督的损失,在不需要真值协方差作为标签的情况下学习不确定性,通过平衡不同传感器模式之间的不确定性,克服学习单个传感器不确定性的局限性,在视觉和惯性退化的场景中进行了验证。

1.3 激光惯性系统

激光雷达(light detection and ranging, LiDAR)与IMU也是常用的组合方式,同样可分为松耦合与紧耦合两种融合方式。

基于松耦合的激光惯性里程计,在LOAM(lidar odometry and mapping)(Zhang和Singh,2014)推出后,追随者越来越多。LOAM定义了逐帧跟踪的边缘与平面3D特征点,使用高频率的IMU测量对两帧激光雷达之间的运动进行插值,该运动作为先验信息用于特征间的精准匹配,从而实现高精度里程计。Shan和Englot(2018)在LOAM的基础上提出LeGO-LOAM(lightweight and ground-optimized lidar odometry and mapping),通过对地平面的优化估计,提高了LOAM在地面车辆上的实时性能。然而,当面临无结构环境或退化场景时(Zhang等,2016),这些算法的性能将会大大降低甚至失效,因为在长高速公路、隧道或空旷空间等场景中,激光雷达的作用距离有限,无法找到有效约束。
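
LOAM类方法的特征提取思路可以概括为:在每条扫描线上利用邻域点计算局部平滑度(曲率),曲率大的点作为边缘特征,曲率小的点作为平面特征。下面按这一思路给出一个简化的Python示意(采用与LOAM类似的平滑度定义,窗口大小与阈值为针对玩具数据假设的值,未包含遮挡点、平行点剔除等工程细节)。

```python
import numpy as np

def extract_features(scan, k=5, edge_thresh=2e-3, planar_thresh=1e-4):
    """scan: (N,3) 单条扫描线上按角度排序的点。
    以类似LOAM的平滑度 c = ||sum_j (p_j - p_i)|| / (2k * ||p_i||) 粗略分类特征点。
    阈值为针对下方玩具数据假设的值。"""
    n = scan.shape[0]
    edges, planars = [], []
    for i in range(k, n - k):
        neighbors = np.vstack([scan[i - k:i], scan[i + 1:i + 1 + k]])
        diff = (neighbors - scan[i]).sum(axis=0)
        c = np.linalg.norm(diff) / (2 * k * np.linalg.norm(scan[i]) + 1e-9)
        if c > edge_thresh:
            edges.append(i)          # 曲率大: 边缘特征点
        elif c < planar_thresh:
            planars.append(i)        # 曲率小: 平面特征点
    return edges, planars

# 假设的一条扫描线: 一段平面墙 + 一个90度拐角
xs = np.linspace(0.0, 2.0, 200)
wall1 = np.stack([xs, np.full_like(xs, 5.0), np.zeros_like(xs)], axis=1)
wall2 = np.stack([np.full_like(xs, 2.0), 5.0 - xs, np.zeros_like(xs)], axis=1)
scan = np.vstack([wall1, wall2])
edges, planars = extract_features(scan)
print(f"{len(edges)} edge points, {len(planars)} planar points")
```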

LIPS(lidar-inertial plane SLAM)(Geneva等,2018)是激光雷达与IMU紧耦合的早期工作之一,它是一种基于图优化的框架,最小化平面特征之间的距离和IMU残差项,提出基于最近点的平面表示方法,优化3D平面因子与IMU预积分。Ye等人(2019)在LOAM的基础上,引入IMU的预积分测量量,提出在快速移动场景中,比LOAM性能更好的LIOM(lidar inertial odometry and mapping)。同样基于LOAM框架,Shan等人(2020)通过引入局部扫描匹配提出了LIO-SAM(lidar inertial odometry via smoothing and mapping),其系统结构如图 4所示,该系统使用IMU预积分对激光雷达点云做运动补偿并为点云配准提供初值;此外,系统还可以加入闭环与GPS信息来消除漂移,从而实现长时间导航。在退化场景中,由于缺乏有效观测,紧耦合的激光惯性系统同样很难适应。

图 4 LIO-SAM紧耦合激光惯性系统结构(Shan等,2020)
Fig. 4 Tightly coupled lidar-inertial system structure of LIO-SAM(Shan et al., 2020)

1.4 激光视觉惯性系统

为了使SLAM算法在光照条件较差或结构退化的场景中都能有效工作,将激光雷达、相机和IMU三者进行融合是个很好的方案。本文也从松耦合与紧耦合两个方面对该系统进行回顾。

近年涌现出将视觉与激光雷达和IMU松耦合的解决方案,几种传感器优势互补,既能够适应退化场景又兼具激光雷达惯性系统的高精度平滑轨迹。Zhang等人(2014)提出DEMO(depth enhanced monocular odometry),使用激光雷达的点云深度值为视觉特征点提供深度信息,可以提供更高精度的位姿估计和更高质量的地图。Zhang和Singh(2015)又基于LOAM算法,集成单目特征跟踪与IMU测量来为激光雷达扫描匹配提供距离先验信息,提出了V-LOAM(visual-lidar odometry and mapping),然而算法执行过程是逐帧进行的,缺乏全局一致性。针对这一问题,Wang等人(2019)通过维护关键帧数据库来进行全局位姿图优化,从而提升全局一致性。为了克服退化问题,Khattak等人(2020)提出另外一种类似LOAM的松耦合方法,它使用视觉惯性先验进行激光雷达扫描匹配,可以在无光照的隧道中运行。Camurri等人(2020)提出用于腿足机器人的Pronto,用视觉惯性里程计为激光雷达里程计提供运动先验信息,并能校正视觉与激光之间的位姿。

为了提高SLAM系统的鲁棒性,研究者们对激光雷达、视觉与IMU的紧耦合方式进行了探索。Graeter等人(2018)提出了一种基于集束调整(bundle adjustment, BA)的视觉里程计系统LIMO(lidar-monocular visual odometry),该算法将激光雷达测量的深度信息重投影到图像空间,并将其与视觉特征相关联,从而保持准确的尺度信息。Shao等人(2019)提出的VIL-SLAM(visual inertial lidar SLAM),直接对3种传感器信息进行联合优化,将视觉惯性里程计与激光里程计相结合作为单独的子系统,用来组合不同的传感器模式。许多学者基于MSCKF对三者执行联合状态优化,Zuo等人(2019a)的LIC(lidar-inertial-camera)-Fusion也用MSCKF框架对激光雷达边缘特征、IMU测量和稀疏视觉特征进行紧耦合操作。在其后续工作LIC-Fusion2.0中(Zuo等,2020),引入了基于滑动窗口的平面特征跟踪方法来处理激光雷达的3D点云。Shan等人(2021)提出的LVI-SAM(lidar-visual-inertial smoothing and mapping),由视觉惯导子系统与激光惯导子系统组成,基于因子图优化,可以实现鲁棒高精度的状态估计与建图。Wisth等人(2021)提出的紧耦合激光视觉惯性系统(VILENS)(框架如图 5所示),则用一个因子图优化框架来联合优化3种传感器,直接提取激光雷达点云中的线面特征,达到实时处理激光雷达数据的目的。

图 5 紧耦合激光视觉惯性系统结构(Wisth等,2021)
Fig. 5 Overview of the VILENS system architecture(Wisth et al., 2021)

1.5 其他传感器融合系统

除了上述的相机、IMU和激光雷达的传感器组合融合方式,还有许多其他的传感器广泛应用于SLAM中。Khan等人(2018)使用卡尔曼滤波器将超声波距离传感器、IMU和轮速计进行融合,实现定位与栅格地图构建。Zhu等人(2019)针对大视野应用场景,使用全景相机实现高精度的相机位姿估计和3D稀疏特征地图的构建。Jang和Kim(2019)针对未知水下环境,融合单波束声学高度计、多普勒测速仪(Doppler velocity log, DVL)或IMU,实现基于面板的测深SLAM。Almalioglu等人(2021)使用无迹卡尔曼滤波(unscented Kalman filter,UKF)框架融合IMU和毫米波雷达信息,完成室内的低成本位姿估计。Zou等人(2020)使用WiFi、激光雷达和相机等传感器,可以在易于访问的自由空间中同时构建密集WiFi无线电地图和空间地图,支持高斯过程回归条件最小二乘生成对抗网络,实现移动机器人平台在复杂室内的高精度定位。Zhou等人(2021)提出一种UWB(ultra-wideband)坐标匹配方法,通过UWB SLAM和LiDAR SLAM中的采样点对,使用轨迹匹配算法得到两个坐标系之间的变换关系。

多传感器融合系统的简要对比如表 1所示,随着新的传感器应用,会出现更多的传感器组合方式,相信未来SLAM的适应性、鲁棒性和性能精度能够进一步得到提升。

表 1 多传感器融合算法简要对比
Table 1 Brief comparison of multi-sensor fusion algorithms

融合方式 模型 性能与贡献
视觉+IMU 松耦合 Konolige等人(2010) 越野地形中跟踪10 km运动,误差小于10 m(0.1%)
Tardif等人(2010) 郊外跟踪2 503 m,最大速度33 km/h,误差为0.05%
Weiss等人(2011) 实现单目尺度估计,与真实数据相比略低于2%
Lynen等人(2013) 提出MSF-EKF,理论上可处理无限数量的传感器测量
紧耦合 MSCKF 推导测量模型,表达静态特征的多视图几何约束,状态量无需3D特征位置
OKVIS 制定概率代价函数,将惯性测量量与图像关键点作为非线性优化问题处理
VINS-MONO 单目视觉惯性估计器,具有IMU预积分、初始化和在线标定等解决方案
Ke等人(2019) 基于Schmidt-Kalman,平均定位精度与闭环检测分别可达到14.8 cm和9.3 cm
激光雷达+IMU 松耦合 LOAM 将SLAM分为里程计估计与点云配准两种算法,58 m走廊精度可达到0.9%
LeGO-LOAM 基于LOAM,加入地平面优化,平移与旋转精度相对LOAM提升2~10倍
紧耦合 LIPS 利用激光雷达最近点平面表示的无奇点平面因子,融合惯性预积分测量
LIOM 提出旋转约束细化算法,将激光雷达位姿与全局地图对齐
LIO-SAM 提出一种滑动窗口方法处理激光雷达帧,可融合GPS、指南针和高度计
激光雷达+视觉+IMU 松耦合 DEMO 激光雷达点云深度值为视觉特征提供深度信息,提高视觉里程计性能
V-LOAM 耦合视觉里程计与激光雷达里程计,相对位置漂移达到0.75%
Wang等人(2019) 模块出现故障时,其余模块自动进行运动跟踪,实现高鲁棒高精度
Pronto 可适应各种复杂环境,弱光条件、运动模糊、反射、动态运动和崎岖地形
紧耦合 LIMO 从LiDAR测量中获取相机特征轨迹的深度提取算法,进入KITTI榜前15名
VIL-SLAM 针对走廊、隧道退化场景,实时生成6自由度位姿,1 cm体素稠密地图
LIC-Fusion 异步传感器在线时空校准,利用稀疏视觉特征与激光点云特征应对剧烈运动
LIC-Fusion2.0 引入滑动窗口平面特征跟踪,提出新异常值拒绝标准,分析系统可观测性
LVI-SAM 由视觉惯性系统与激光雷达惯性系统构成,二者互相促进优化,各取所需
其他传感器 Khan等人(2018) 融合超声波距离传感器、IMU和轮速计,完成定位与栅格地图构建
Zhu等人(2019) 全景相机实现大视野高精度的相机位姿估计与3D稀疏地图构建
Jang和Kim(2019) 融合声学高度计、IMU和DVL,实现基于双线性面板的未知水下测深SLAM
Zou等人(2020) 融合WiFi、激光雷达和相机,构建WiFi无线电地图,实现室内高精度定位

2 多特征基元融合

针对复杂环境,基于单特征基元的SLAM在精度和鲁棒性方面都有所欠缺,容易受到光照、运动和纹理等因素的影响,不确定性较高。而对于相机图像,不仅可以从中提取特征点,还可以获得线段特征、平面特征和像素灰度等信息特征;对于激光雷达点云,可以从中提取线特征点、面特征点和有正态分布特性的体素特征(voxel)。本节将对多特征基元之间的融合方法进行介绍,多特征基元融合系统的简单对比如表 2所示。

2.1 特征点法与直接法

在视觉SLAM系统中,特征点法通过提取和匹配相邻图像(关键)帧的特征点估计对应的帧间相机运动,包括特征检测、匹配、运动估计和优化等步骤(邹雄等,2020)。最具代表性的工作为牛津大学提出的PTAM(parallel tracking and mapping)(Klein和Murray,2007),它开创性地将相机跟踪和建图分为两个并行的线程。基于关键帧的技术已经成为视觉SLAM和视觉里程计的黄金法则,在同等算力的情况下,它比滤波方法更加精确(Strasdat等,2012)。Strasdat等人(2011)使用双窗口优化和共视图实现了大场景的单目视觉SLAM。基于前人的工作,Mur-Artal等人(2015)提出的ORB-SLAM、Mur-Artal和Tardós(2017)提出的ORB-SLAM2以及Campos等人(2021)提出的ORB-SLAM3使用ORB(oriented fast and rotated brief)特征,这种描述子可以提供较长时间内的数据关联。ORB-SLAM系列算法使用DBoW2(bags of binary words)(Gálvez-López和Tardós,2012)实现回环检测和重定位,ORB-SLAM3甚至可以适配单目、双目、RGB-D、针孔和鱼眼等相机模型。基于特征的方法比较依赖检测和匹配阈值,还需要稳健的技术去处理错误匹配,且计算量较大。

表 2 多特征基元融合算法简要对比
Table 2 Brief comparison of multi-feature primitive fusion algorithms

融合方式 模型 性能与贡献
特征点法与直接法 SVO 基于图像强度进行运动跟踪估计,基于特征的方法进行建图,提高鲁棒性
PL-SVO 将SVO扩展到单目视觉里程计,引入线段跟踪处理,适应弱纹理结构化场景
SVL 关键帧中提取匹配ORB特征用于优化闭环,直接法跟踪非关键帧提升速度
融合多种几何特征 点、线 Marzorati等人(2007) 通过不确定投影几何,组合估计2D和3D点、2D和3D线以及3D平面
Zhang等人(2015) 直线作为特征的基于图形的视觉SLAM,使用不同的表示方法参数化3D线
PL-SLAM 以ORB-SLAM为基础,同时处理点和线特征,适应纹理较低的环境
Zuo等人(2019a) 将正交表示作为最小参数化,建模图像的点、线特征,改善视觉SLAM系统
线、面 PLADE 使用平面/线特征及其相互关系进行配准,在数据获取中提供更大自由度
Geneva等人(2018) 利用激光雷达最近点平面表示的无奇点平面因子,融合惯性预积分测量
Deschaud(2018) 使用隐式移动最小二乘法表面表示LiDAR扫描,无闭环可达到4 km漂移0.4%

与特征点法不同的是,直接法不用提取图像特征,而是直接使用像素强度信息,通过最小化光度误差来实现运动估计。Newcombe等人(2011b)的DTAM(dense tracking and mapping)首先使用直接法实现了单目视觉SLAM,它提取每个像素的逆深度并通过优化的方法构建深度图,进而完成相机位姿估计。LSD-SLAM(large-scale direct SLAM)(Engel等,2014)针对大规模场景,能够使用单目相机获得全局一致的半稠密地图。DSO(direct sparse odometry)(Engel等,2018)对整个图像中的像素进行均匀采样,考虑了光度校准、曝光时间、镜头渐晕(lens vignetting)和非线性响应函数。LDSO(loop-closure DSO)(Gao等,2018)在DSO的基础上增加闭环,保证长时间的跟踪精度。Stereo DSO(Wang等,2017a)则用多视图几何来估计深度值。虽然直接法相较于特征点法省去了特征点和描述子的计算时间,只利用像素梯度就可构建半稠密甚至稠密地图,但是由于图像的非凸性,完全依靠梯度搜索不利于求得最优值,而且灰度不变是一个非常强的假设,单个像素又没有什么区分度(高翔和张涛,2019),所以直接法在选点较少时无法体现出其优势。
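
直接法的核心是光度误差:给定参考帧像素及其逆深度,将其反投影为3D点、经相对位姿变换后投影到当前帧,比较两处的像素灰度。下面给出单个像素光度残差的简化Python示意(针孔模型、双线性插值;内参、位姿与图像均为假设数据,仅用于说明残差的构造方式);实际系统还会叠加光度标定、鲁棒核函数,并对大量像素进行联合优化。

```python
import numpy as np

def bilinear(img, uv):
    """双线性插值读取灰度"""
    x, y = uv
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

def photometric_residual(I_ref, I_cur, u_ref, inv_depth, K, R, t):
    """单个像素的光度残差 r = I_cur(pi(R * pi^{-1}(u_ref, d) + t)) - I_ref(u_ref)"""
    K_inv = np.linalg.inv(K)
    # 反投影: 参考帧像素 -> 参考帧相机系下的3D点
    p_ref = K_inv @ np.array([u_ref[0], u_ref[1], 1.0]) / inv_depth
    # 变换到当前帧并投影回像素平面
    p_cur = R @ p_ref + t
    uv = (K @ (p_cur / p_cur[2]))[:2]
    return bilinear(I_cur, uv) - I_ref[int(u_ref[1]), int(u_ref[0])]

# 假设数据: 256x256 的水平渐变图像, 单位旋转、小平移, 验证残差可以被计算
I = np.tile(np.arange(256, dtype=float), (256, 1))
K = np.array([[200.0, 0, 128.0], [0, 200.0, 128.0], [0, 0, 1]])
r = photometric_residual(I, I, u_ref=(100, 120), inv_depth=0.5,
                         K=K, R=np.eye(3), t=np.array([0.05, 0.0, 0.0]))
print("photometric residual:", r)
```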

利用特征法和直接法的优点,Forster等人(2014)在多旋翼飞行器上实现了半直接单目视觉里程计(semi-direct visual odometry, SVO)。该系统工作流程如图 6所示,使用像素间的光度误差,通过基于稀疏模型的图像对齐进行位姿初始化;通过最小化特征块匹配的重投影误差优化位姿和地图点,可以更快更准地得到状态估计结果。他们还将这种组合方式推广到了多相机系统中(Forster等,2017b)。针对SVO对姿态初始值过度依赖的问题,Gomez-Ojeda等人(2016)提出PL-SVO(SVO by combining points and line segments),用点和线段特征对SVO进行改进。Lee和Civera(2019)则将ORB-SLAM与DSO进行松耦合来提升系统的定位精度,该方法的前后端几乎是独立的,限制了性能的进一步提高。Kim等人(2019)提出一种双目测距的半直接方法,其中运动估计用特征法来获得,相机姿态则用直接法来优化。SVL(semi-direct visual SLAM with loop closure)在关键帧中提取ORB特征,然后用直接法跟踪非关键帧中的这些特征,从而实现一个快速准确的半直接SLAM系统(Li等,2019b)。

图 6 特征点法与直接法结合的跟踪与建图流程(Forster等,2014, 2017b)
Fig. 6 Tracking and mapping pipeline combining feature-based method and direct method (Forster et al., 2014, 2017b)

2.2 融合多种几何特征

点特征在视觉SLAM和激光SLAM中得到研究者们普遍使用,然而在走廊、礼堂和地下车库等环境中,由于无法有效提取足够的特征点,基于点特征的方法不再适用,但是上述环境中的直线、曲线、平面、曲面和立方体等多维几何特征却十分丰富,这为研究者们解决问题提供了思路。

许多学者尝试使用环境中的点、直线(线段)和面特征来辅助视觉SLAM系统进行状态估计。Marzorati等人(2007)通过不确定投影几何将3D点和线集成到6自由度视觉SLAM中,在构成的框架内,可以描述、组合和估计诸如2D点和3D点、2D和3D线以及3D平面等各种类型的几何元素,以此来提高建图和位姿估计精度。Zhang等人(2015)提出一种基于图像直线特征的视觉SLAM系统,使用双目相机进行运动估计、位姿优化和BA,并用不同的表示方式来参数化3D线,在线特征比较丰富的环境中,性能要优于基于点特征的方法。Gomez-Ojeda和Gonzalez-Jimenez(2016)用基于概率的方法,将点特征与线特征进行组合,通过最小化点和线段特征的投影误差来恢复相机运动,该系统在低纹理场景中也能有效地工作。PL-SLAM(points and lines SLAM)(Pumarola等,2017)则以ORB-SLAM为基础,同时处理点特征和线特征,其系统框架如图 7所示,用3幅连续图像帧中的5条线段来估计相机位姿并构建3D地图。同样基于ORB-SLAM,Zuo等人(2017)采用正交表示作为最小参数化,建模视觉SLAM中的点特征和线特征,并推导出重投影误差关于线特征参数的雅可比矩阵,在仿真和实际场景中取得了较好的实验效果。Yang等人(2019a)基于图像的线段测量,提出滑动窗口的3D线三角化算法,并揭示了导致三角化失败的3种退化运动,为其提供几何解释。Arndt等人(2020)将平面地标和平面约束添加到基于特征的单目SLAM中,它不依赖深度信息或深度神经网络就可实现更完整更高级别的场景表示。除此之外,共线(Zhou等,2015)、共面(Li等,2020b)和平行(Li等,2020a)等关系也可为几何特征提供正则化,进一步提高估计器的精度。甚至更多维的几何特征,如曲线(Meier等,2018)、曲面(Nicholson等,2019)和立方体(Yang和Scherer,2019)都可用来提升SLAM系统的性能。

图 7 双目PL-SLAM系统框架(Pumarola等,2017)
Fig. 7 Scheme of the stereo PL-SLAM system(Pumarola et al., 2017)
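
点线结合的视觉SLAM中,线特征常用的重投影误差为:将3D线段的两个端点投影到图像平面,分别计算它们到检测出的2D直线的距离。下面给出该残差的简化Python示意(针孔模型,2D直线用归一化的齐次系数表示;数据均为假设值);正交表示等最小参数化主要影响优化时线参数的更新方式,残差形式与此类似。

```python
import numpy as np

def line_reprojection_error(P1, P2, line_2d, K, R, t):
    """3D线段端点 P1, P2(世界系)投影后到检测直线 line_2d 的距离残差。
    line_2d = (a, b, c) 表示图像直线 a*u + b*v + c = 0, 归一化后点线内积即距离。"""
    def project(P):
        p = K @ (R @ P + t)          # 世界系 -> 相机系 -> 像素齐次坐标
        return p[:2] / p[2]
    l = np.asarray(line_2d, dtype=float)
    l = l / np.linalg.norm(l[:2])     # 归一化, 使 |l·[u,v,1]| 即为点到直线距离
    e1 = l @ np.append(project(P1), 1.0)
    e2 = l @ np.append(project(P2), 1.0)
    return np.array([e1, e2])

# 假设数据: 一条水平3D线段与图像中检测到的水平直线 v = 128
K = np.array([[200.0, 0, 128.0], [0, 200.0, 128.0], [0, 0, 1]])
P1, P2 = np.array([-1.0, 0.0, 4.0]), np.array([1.0, 0.0, 4.0])
detected_line = (0.0, 1.0, -128.0)    # 0*u + 1*v - 128 = 0
err = line_reprojection_error(P1, P2, detected_line, K, np.eye(3), np.zeros(3))
print("line reprojection error:", err)
```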

激光雷达的激光点测量信息可直接用点云匹配算法进行位姿估计(Konecny等,2016)。近年来激光SLAM也开始利用线特征(Wu等,2018)、面特征(Berger,2013)等多维几何特征提供有效约束,进行更高精度的点云配准和位姿估计。PLADE(plane-based descriptor)(Chen等,2020c)用平面特征和线特征以及二者之间的相互关系来进行配准,并且可以配准重叠较小的点云,使数据获取更加自由。激光雷达数据中,与直接使用点特征相比,线和面特征之间的数据关联有时更为简单,比如仅有3D位置的两个激光点很难进行有效区分和准确的数据关联。现有的激光雷达数据关联通常需要迭代地寻找最近的几何特征(Deschaud,2018),这种数据关联方式效率较低,容易出错,会降低SLAM的精度并导致状态估计不一致。
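
在基于面特征的激光点云配准中,常用的几何残差是点到平面的带符号距离:将当前帧的点按待优化位姿变换到地图坐标系后,与关联平面的法向量和截距做内积。下面给出该残差的Python示意(平面以单位法向量和截距表示,点云为假设数据);实际系统会在此基础上结合不同的平面参数化与数据关联策略。

```python
import numpy as np

def point_to_plane_residuals(points, plane_n, plane_d, R, t):
    """点到平面残差: r_i = n^T (R p_i + t) + d, 其中 n 为单位法向量。"""
    transformed = points @ R.T + t            # 将点云变换到地图坐标系
    return transformed @ plane_n + plane_d    # 每个点到平面的带符号距离

# 假设数据: 地面平面 z = 0 (n=[0,0,1], d=0), 点云带有 0.1 m 的高度偏移
plane_n, plane_d = np.array([0.0, 0.0, 1.0]), 0.0
pts = np.column_stack([np.random.default_rng(2).uniform(-5, 5, (100, 2)),
                       np.full(100, 0.1)])
res = point_to_plane_residuals(pts, plane_n, plane_d, np.eye(3), np.zeros(3))
print("mean point-to-plane distance:", res.mean())   # 约为 0.1
```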

3 多维度信息融合

传统SLAM系统发展几十年至今,理论方面逐渐趋于成熟。其中的闭环检测通常依赖于从传感器原始数据中提取的几何基元特征,如特征点、线和面等,通过对几何特征编码的场景特征向量进行匹配实现闭环检测。但在恶劣环境下,几何特征提取极不稳定,难以保证准确的闭环检测。而语义信息是一种长期稳定的特征,不易受环境因素的影响,但只用语义信息无法实现精确定位。因此,可以尝试将语义信息融合到传统SLAM系统中,构建长期稳定的定位系统。近年来,基于数据驱动的深度学习的方法逐渐兴起,通过对大量数据的学习可以得到比手工设计更加精确的模型,故而传统方法与学习方法有效融合可以提升定位系统的精度和鲁棒性。此外,物理信息辅助位姿估计也是许多研究者感兴趣的方法之一,通过对特定物理信息分析建模,可为状态估计提供有效约束。表 3对当前的多维度信息融合系统进行了对比。

表 3 多维度信息融合算法简要对比
Table 3 Brief comparison of multi-dimensional information fusion algorithms

融合方式 模型 性能与贡献
几何信息与语义信息 SLAM++ 利用对象和结构先验,完成实时3D对象识别跟踪表面重建,开辟几何语义建图系统
SemanticFusion 多视点语义预测概率性地融合到地图中,结合卷积神经网络完成3D语义地图构建
Kimera 实时度量语义SLAM系统,支持3D中的网格重建和语义标记,可在CPU实时运行
学习方法与传统方法 里程计 DeepVO 基于监督学习的深度循环卷积神经网络完成单目端到端里程计,直接由图像获取位姿
IONet 深度循环神经网络完成惯性传感器端到端地位姿学习,克服偏差噪声累计误差影响
建图 Eigen等,2014 用带有深度标签的图像数据集训练深度神经网络预测像素深度,提高准确性
SurfaceNet 提出一种多视图立体视觉的网络表面学习框架,直接学习表面结构的几何关系
CodeSLAM 以单个图像的强度数据为条件,对场景进行隐式编码构成稠密场景地图
CodeVIO 紧耦合深度神经网络与VIO,在EKF框架中使用稀疏测量高效更新稠密深度图
定位 PoseNet 隐式基于地图的定位,训练卷积神经网络,从单目图像中估计位姿,完成相机定位
NN-Net 2D-2D显式基于地图的定位,直接从成对图像中回归得到相机的位姿,完成相机定位
HF-Net 2D-3D基于描述子匹配,恢复2D图像在3D场景模型中的相机位姿,完成相机定位
其他 Tang和Tan(2018) 以特征度量误差显示强制执行多视图几何约束,完成相机与场景局部一致性优化
Sheng等人(2019) 解决关键帧检测和视觉里程计的联合学习问题,可靠地检测关键帧并定位新帧
Zhou等人(2020) 基于密集关键帧的相机跟踪和深度图估计,6自由度跟踪性能达到RGB-D等级
融合物理信息 Britcher和Bergbreiter(2021) 将气压差作为小型四旋翼的测量输入,扩展现有的地面效应推理模型,增强对相关物体的检测
Nisar等人(2019) 结合机器人运动力学和预积分残差中的外部推进力,解决模型与实际运动间的差异
Zuo等人(2021) 融合环境物理信息的运动流形和机器人物理信息的运动学参数,提升位姿估计精度

3.1 几何信息与语义信息

几何信息在机器人定位导航领域至关重要,语义信息不仅可以辅助构建稳健的SLAM系统,还可以为机器人提供抽象模型,便于其理解和执行人类的指令。早期关于几何重建(如运动恢复结构(structure from motion,SfM)(Enqvist等,2011)和多视图立体几何(Schops等,2017))与语义分割(Garcia-Garcia等,2017)的研究基本上是独立开展的。近年来,研究者们对二者交叉的研究和应用兴趣浓厚,并产生了许多优秀的成果(Cadena等,2016)。

早期的几何语义理解由于计算量问题无法达到实时,只能离线运行(Bao和Savarese,2011;Brostow等,2008)。SLAM++(Salas-Moreno等,2013)是一个实时增量的SLAM系统,可以高效地对场景进行语义描述,非常适合由重复相同的结构和特定领域的物体组成的公共建筑内部环境,能够完成3D对象的实时识别跟踪,并提供6自由度相机对象约束。得益于这项开创性工作,涌现出了大批实时几何语义建图系统。SemanticFusion(McCormac等,2017)将卷积神经网络和ElasticFusion(Whelan等,2015)相结合,用RGB-D相机帧间的像素级匹配把每帧的2D分割融合为一个连贯的3D语义地图,可以在25 Hz帧率下进行实时交互。Zheng等人(2019)基于语义分割的在线RGBD重建,提出一种未知环境下的机器人主动场景理解方法,使用在线估计的视角分数场(viewing score field,VSF)和截断符号距离函数(truncated signed distance function)联合优化路径和相机位姿。Tateno等人(2015)和Li等人(2016)使用概率推理的方法,结合对象位姿估计和SLAM场景理解与语义分割,构建了一种在线增量场景建模框架,提高了语义分割和6自由度对象位姿估计性能。其他诸如Fusion++(McCormac等,2018)、MaskFusion(Runz等,2018)、Co-Fusion(Rünz和Agapito,2017)和MID-Fusion(multi-instance dynamic fusion)(Xu等,2019a)等工作用的传感器大部分是RGB-D相机,基于体素、面元或物体表示,并使用GPU加速来实现跟踪或建图。其他基于CPU的解决方案,如在室内场景中可实时运行在移动设备上的方法(Wald等,2018)、Voxblox++(Grinvald等,2019)和PanopticFusion(Narita等,2019),使用的传感器也是RGB-D相机。也有其他的一些解决方案使用激光雷达,如SemanticKITTI(Behley等,2019)和SegMap(Dubé等,2018);单目相机,如CNN-SLAM(Tateno等,2017),VSO(visual semantic odometry)(Lianos等,2018),XIVO(Dong等,2017)等。Kimera(Rosinol等,2020, 系统结构如图 8所示)将视觉惯性SLAM系统、网格重建和语义理解相结合,提供一个快速、轻量级并且可以扩展的基于CPU的解决方案,可以在室内室外等场景中良好运行。

图 8 Kimera结构示意图(Rosinol等,2020)
Fig. 8 Kimera's schematic(Rosinol et al., 2020)

3.2 学习方法与传统方法

传统SLAM方法发展至今,理论已趋于成熟,并在各种数据集上达到了不错的效果。但实际应用场景复杂度一般要高于数据集,某些基于物理模型或几何理论的假设与实际情况不相符。近年来基于深度学习的方法逐渐兴起,可以通过数据驱动的方式学习得到比手工设计更加精确的模型,从而提升SLAM系统的性能。从里程计估计、建图和全局定位到同时定位与建图(Chen等,2020b),深度学习方法已经在SLAM系统的方方面面得到应用。

里程计用两帧或多帧传感器数据来估计载体的相对位姿变化,以初始状态为基础推算出全局姿态,其核心问题是如何从各种传感器测量中准确地估计出平移和旋转变换。当前的深度学习方法在视觉里程计、惯性里程计、视觉惯性里程计和激光里程计等应用中已经实现端到端的方案。典型的基于学习的视觉里程计结构如图 9所示。基于监督学习的DeepVO(Wang等,2017b)使用卷积神经网络和递归神经网络的组合方式实现视觉里程计的端到端学习, 卷积神经网络完成成对图像的视觉特征提取,递归神经网络则用来传递特征并对其时间相关性建模。基于无监督学习的SfmLearner(Zhou等,2017)由一个深度网络和一个位姿网络构成,深度网络用来预测图像的深度图,位姿网络用来学习图像之间的运动变换。单目视觉里程计D3VO(deep depth, deep pose and deep uncertainty visual odometry)(Yang等,2020)在深度、位姿和不确定性估计3个层次上使用深度网络,在仅使用一个相机的情况下与当时性能最好的视觉惯性里程计不相上下。Chen等人(2018)提出IONet(inertial odometry networks)用于从惯性测量序列中端到端学习相对位姿,这种纯惯性方案可以应用在视觉信息缺失的极端环境中。DeepVIO(deep learning visual inertial odometry)(Han等,2019)将双目图像和惯性数据集成到一个无监督学习框架中,用特有损失进行训练,可以在全局范围内重建运动轨迹。CodeVIO(code visual-inertial odometry)(Zuo等,2020)提出一个轻量级紧耦合的深度网络和视觉惯性里程计系统,可以提供准确的状态估计和周围环境的稠密深度图。对于激光雷达里程计,LO-Net(lidar odometry networks)(Li等,2019a)用深度卷积网络端到端地训练激光雷达点云的特征选择、特征匹配和位姿估计,甚至可以用几何信息和语义信息提高系统精度,与LOAM算法的准确度相当。Kim等人(2021)在对不确定性进行建模时,引入无监督学习,不需要真值协方差等标签,并提出一种方法可以平衡不同传感器模型之间的不确定性,在某些数据集上的效果已优于传统方法。

图 9 基于监督学习的视觉里程计(Wang等,2017b)和基于无监督学习的视觉里程计(Zhou等,2017)典型结构
Fig. 9 The typical structure of supervised learning of visual odometry (Wang et al., 2017b) and unsupervised learning of visual odometry(Zhou et al., 2017)
((a)supervised learning; (b)unsupervised learning)
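
DeepVO一类基于监督学习的视觉里程计可概括为“卷积网络提取相邻帧间的运动特征+循环网络建模时序相关性+全连接层回归6自由度相对位姿”。下面给出按此思路搭建的极简PyTorch骨架(层数、通道数等超参数均为本文假设,并非原论文的网络配置),训练时以真值相对位姿作监督,损失通常为平移与旋转误差的加权和。

```python
import torch
import torch.nn as nn

class TinyDeepVO(nn.Module):
    """简化的 CNN + LSTM 端到端视觉里程计骨架(结构为示意, 非原论文配置)。
    输入: (B, T, 6, H, W), 即相邻两帧RGB图像在通道维拼接后的序列。
    输出: (B, T, 6), 每一步的6自由度相对位姿(平移 + 旋转向量)。"""
    def __init__(self, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(              # 卷积部分: 提取帧间运动特征
            nn.Conv2d(6, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rnn = nn.LSTM(64, hidden, batch_first=True)   # 循环部分: 建模时序相关性
        self.head = nn.Linear(hidden, 6)                   # 回归相对位姿

    def forward(self, x):
        b, t = x.shape[:2]
        feat = self.encoder(x.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feat)
        return self.head(out)

# 用随机数据前向一次, 验证张量维度
model = TinyDeepVO()
poses = model(torch.randn(2, 5, 6, 64, 64))
print(poses.shape)   # torch.Size([2, 5, 6])
```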

在建图领域,深度学习已经完成了场景感知理解的体系构建。从几何地图到语义地图再到一般地图,深度学习都有所涉猎。在几何地图的深度表示中,基于监督的学习方法(Eigen等,2014;Ummenhofer等,2017;Liu等,2016)用带有深度标签的图像数据集训练深度神经网络来预测图像中每个像素的深度,相对传统基于结构的方法,虽然可以提高深度预测的准确性,但这种方法过于依赖模型训练,在新的场景中很难有效工作。基于无监督的学习方法(Godard等,2017;Zhou等,2017)分别将空间一致性和时间一致性作为自监督信号,进行深度和自身运动估计,在双目和单目深度预测中取得了较好的效果,若能加入更多的附加约束,应该可以更好地恢复网络参数,提升深度预测性能。对于体素这一几何特征,SurfaceNet(Ji等,2017)提出一种多视图立体视觉的网络表面学习框架,可以直接学习表面结构的照片一致性和几何关系,通过学习预测体素的置信度,进一步确定它是否在表面上来重建场景的3D表面,虽然可以将其准确重建,但缺乏后处理方法来进一步提高精度。开创性工作PointNet(Charles等,2017)致力于直接处理点特征,通过最大池化单个对称函数处理无序的点数据,用于目标分类、部分分割和场景语义理解。对于语义地图,DA-RNN(data associated recurrent neural networks)(Xiang和Fox,2017)提出数据关联递归神经网络,将递归模型引入语义分割框架中,从而学习多个视图框架上的时间连接,网络的输出与KinectFusion(Newcombe等,2011a)等建图技术结合,以便将语义信息注入到重建的3维场景中。但该结构主要关注对象类标记,对广泛的语义标记问题还需进一步验证。CodeSLAM(Bloesch等,2018)以单个图像的强度数据为条件,将场景进行隐式编码来构成通用详细的场景地图,这种基于关键帧的方法,可以通过引入运动估计先验来完成鲁棒跟踪。Roddick和Cipolla(2020)使用语义贝叶斯占用网络框架,提出简单统一的端到端深度学习框架,直接从单目图像中估计语义地图,可以为车道检测预测提供思路。SLOAM(semantic lidar odometry and mapping)(Chen等,2020d)利用自定义的虚拟现实工具来标记用于训练语义分割网络的激光雷达3D扫描,实现基于语义特征的位姿优化,可在手持设备或飞行器上稳定运行。

全局定位通过2D或3D场景模型提供的先验知识确定载体的绝对位姿,深度学习可用于解决此过程中的数据关联问题。2D-2D显式基于地图的定位NN-Net(N ranked references and N relative poses networks)(Laskar等,2017)直接从成对图像中回归相对位姿,隐式基于地图的定位PoseNet(Kendall等,2015)通过训练卷积神经网络,从单目图像中估计相机位姿,从而端到端地解决相机重定位问题。2D-3D基于描述子匹配的HF-Net(hierarchical features networks)(Sarlin等,2019)和基于场景坐标回归的定位(Bui等,2018),用深度学习的方法恢复2D图像在3D场景模型中的相机位姿。激光雷达3D-3D的定位L3-Net(Lu等,2019)使用PointNet(Charles等,2017)处理点云数据以提取编码某些有用属性的特征描述子,并通过递归神经网络建模动力学模型,用最小化点云输入和3D地图之间的匹配距离来优化预测位姿和真实值之间的损失,进而构建基于学习的激光雷达定位框架。

SLAM系统中的其他模块:确保相机运动和场景几何的局部一致性的局部优化(Clark等,2018;Tang和Tan,2018);在全局范围内限制轨迹漂移的全局优化(Zhou等,2020;Czarnowski等,2020);减轻系统漂移误差的关键帧(Sheng等,2019)和回环检测(Süenderhauf等,2015;Gao和Zhang,2017);以及对SLAM系统提供置信度度量的不确定估计(Wang等,2018),都有相应的深度学习方案提出。

3.3 融合物理信息

SLAM算法一般部署在特定环境中的特定机器人平台上,来自于环境和机器人平台自身的物理约束可以为状态估计提供有效信息。利用物理信息辅助SLAM任务中位姿估计主要有两类方法(左星星,2021):一类方法直接使用传感器测量相应的物理量,如气压(Britcher和Bergbreiter,2021)、高度(Jang和Kim,2019)和物理接触(Hartley等,2020)等;另一类方法在没有直接的传感器测量情况下,从背景知识出发间接制定约束条件,如地形(Sawa等,2018;Xu等,2019b)和推进力(Nisar等,2019)等。

对于直接使用传感器测量值,Britcher和Bergbreiter(2021)评估了气压差测量作为小型四旋翼地面、天花板和墙壁感应手段的可行性,扩展现有的地面效应推理模型,推导模型来预测由于悬停和倾斜地面效应引起的压力变化,从而增强对相关物体的检测。Jang和Kim(2019)针对未知水下环境,融合单波束声学高度计、多普勒测速仪或IMU,提出的具有双线性面板模型的测深SLAM在构建地图时所需存储空间更小,计算量更低。Hartley等人(2020)使用不变扩展卡尔曼滤波器(invariant extended Kalman filter,InEKF)融合IMU和接触传感器的测量进行腿足机器人的状态估计,实现接触—惯性动力学与前向运动学的结合。

对于传感器不能直接测量的物理量,可以建立相应的物理模型来提供额外的约束以正则化状态估计(左星星,2021)。VIMO(visual inertial model-based odometry)(Nisar等,2019)提出一种相对运动约束,结合机器人动力学和预积分残差中的外力,以减小模型预测运动与实际运动之间的差异。Zhang等人(2021)提出融合环境物理信息的运动流形,Zuo等人(2019b)融合机器人物理信息的运动学参数进行位姿估计,这两种解决方案提升了机器人在复杂大规模真实环境中的位姿估计精度。

4 问题与挑战

回顾本文分析的SLAM解决方案,可概括出一种多源融合SLAM框架(如图 10所示),并将其分为3个层面。第1层面的多传感器融合,结合应用场景以及每种传感器的优缺点选择不同的传感器组合方式;第2层面的多特征基元融合,对图像的点、线、面特征及像素灰度信息进行提取处理,对于激光雷达点云还可得到3维的线、面和体素特征等,将不同的特征基元进行有效融合来满足应用需求;第3层面的多维度信息融合,将图像和点云通过学习方法得到的语义信息与传统的几何信息以及通过其他传感器获取的环境及平台自身的物理信息进行融合,以适应具有挑战性的复杂环境。

图 10 一种多源融合SLAM框架
Fig. 10 A frame of multi-source fusion SLAM

同时满足上述3个层面的多源融合SLAM,一般基于多传感器融合,针对不同传感器的特征,使用不同的方法进行处理融合。Lvio-Fusion(Jia等,2021)基于图优化融合双目相机、激光雷达、IMU和GPS,使用轻量级深度强化学习方法调整每个因子的权重,实现了高精度、高鲁棒的城市SLAM框架。Du等人(2020)根据沙漠蚂蚁导航原理,观察光线偏振,设计仿生天空偏振光传感器,结合激光雷达和里程计实现位置、方向和地图的估计,并提高定向精度。Kimera(Rosinol等,2020)基于相机与IMU,融合学习方法的3D几何与语义重建,为视觉惯性里程计、SLAM、3D重建和分割等领域的研究人员提供了很好的学习案例。

通过对多源融合SLAM的梳理,可知尽管它在过去几十年中取得了重大进展,但仍有许多挑战需要应对,受篇幅限制本文仅列出部分开放性挑战和问题供大家讨论:

1) 合理融合多传感器的测量往往能提升系统性能,但在融合之前需要标定不同传感器之间的相对位姿变换与时间戳,一般会选择离线标定,然而在使用过程中,传感器间的外参由于机械变形及外界物理环境的变化,很有可能发生改变。虽然有在估计器中进行在线外参标定的解决方案,但其容易受到退化运动的影响而失效(左星星,2021)。故而需要开发鲁棒适应性强的实时在线多传感器外参标定、时间同步方法。

2) 在具有挑战性的动态复杂环境条件下,如变化的光照、剧烈运动、开阔场地和缺乏纹理的场景等,需要长期稳健安全的算法来支撑,其应该具备有效表示和实时检测跟踪运动物体的能力,以及将低维几何特征与多维几何特征相结合的能力。在此过程中,通用鲁棒的几何特征提取、参数化、数据关联、耦合方法以及可信度估计等显得尤为重要。

3) 在融合几何信息和语义信息、传统方法与学习方法、外界与自身物理信息等多维度信息时,大量的数据处理将非常耗时,而且对计算性能有一定要求,如何构建轻量级、紧凑部署和快速响应的SLAM系统仍是一个挑战。此外,能否构建出统一的、适应性极强的SLAM框架来应对不断出现的新传感器、新表示方法和新学习方法,是一个值得探究的问题。

5 结语

本文从多传感器融合、多特征基元融合和多维度信息融合3个融合层面对近年来的多源融合SLAM进行系统性分析与回顾,提出的多源融合框架便于研究者宏观掌握SLAM的发展脉络及未来方向。重点介绍了常用的传感器之间的组合方式、低维几何特征与多维几何特征的结合、几何信息与语义信息的融合、学习方法与传统方法的融合等多源融合SLAM系统,并对各方法的优缺点进行简要阐述。最后概括了当前多源融合SLAM仍然存在的问题与面临的挑战,在多传感器在线标定、适应高动态复杂环境、高层次语义信息理解和资源有效合理分配等方面仍然需要进一步研究。未来的多学科交叉技术将对解决多源融合SLAM问题有所帮助。

参考文献

  • Almalioglu Y, Turan M, Lu C X, Trigoni N, Markham A. 2021. Milli-RIO: Ego-motion estimation with low-cost millimetre-wave radar. IEEE Sensors Journal, 21(3): 3314-3323 [DOI:10.1109/JSEN.2020.3023243]
  • Arndt C, Sabzevari R and Civera J. 2020. From points to planes-adding planar constraints to monocular SLAM factor graphs//Proceedings of 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems. Las Vegas, USA: IEEE: 4917-4922[DOI: 10.1109/IROS45743.2020.9340805]
  • Bao S Y and Savarese S. 2011. Semantic structure from motion//CVPR 2011. Colorado Springs, USA: IEEE: 2025-2032[DOI: 10.1109/CVPR.2011.5995462]
  • Behley J, Garbade M, Milioto A, Quenzel J, Behnke S, Stachniss C and Gall J. 2019. SemanticKITTI: a dataset for semantic scene understanding of LiDAR sequences//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea(South): 9296-9306[DOI: 10.1109/ICCV.2019.00939]
  • Berger C. 2013. Toward rich geometric map for SLAM: online detection of planes in 2D lidar. Journal of Automation Mobile Robotics and Intelligent Systems, 7(1): 35-41
  • Bescos B, Fácil J M, Civera J, Neira J. 2018. DynaSLAM: tracking, mapping, and inpainting in dynamic scenes. IEEE Robotics and Automation Letters, 3(4): 4076-4083 [DOI:10.1109/LRA.2018.2860039]
  • Bloesch M, Czarnowski J, Clark R, Leutenegger S and Davison A J. 2018. CodeSLAM-learning a compact, optimisable representation for dense visual SLAM//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 2560-2568[DOI: 10.1109/CVPR.2018.00271]
  • Britcher V, Bergbreiter S. 2021. Use of a MEMS differential pressure sensor to detect ground, ceiling, and walls on small quadrotors. IEEE Robotics and Automation Letters, 6(3): 4568-4575 [DOI:10.1109/LRA.2021.3068661]
  • Brostow G J, Shotton J, Fauqueur J and Cipolla R. 2008. Segmentation and recognition using structure from motion point clouds//Proceedings of the 10th European Conference on Computer Vision. Marseille, France: Springer: 44-57[DOI: 10.1007/978-3-540-88682-2_5]
  • Bui M, Albarqouni S, Ilic S and Navab N. 2018. Scene coordinate and correspondence learning for image-based localization. [EB/OL]. [2021-07-05]. https://arxiv.org/pdf/1805.08443.pdf
  • Burschka D and Hager G D. 2004. V-GPS(SLAM): vision-based inertial system for mobile robots//Proceedings of 2004 IEEE International Conference on Robotics and Automation. New Orleans, USA: IEEE: 409-415[DOI: 10.1109/ROBOT.2004.1307184]
  • Cadena C, Carlone L, Carrillo H, Latif Y, Scaramuzza D, Neira J, Reid I, Leonard J J. 2016. Past, present, and future of simultaneous localization and mapping: toward the robust-perception age. IEEE Transactions on Robotics, 32(6): 1309-1332 [DOI:10.1109/TRO.2016.2624754]
  • Campos C, Elvira R, Rodríguez J J G, Montiel J M M and Tardós J D. 2021. ORB-SLAM3: an accurate open-source library for visual, visual-inertial, and multimap SLAM. IEEE Transactions on Robotics[DOI: 10.1109/TRO.2021.3075644]
  • Camurri M, Ramezani M, Nobili S, Fallon M. 2020. Pronto: a multi-sensor state estimator for legged robots in real-world scenarios. Frontiers in Robotics and AI, 7: 68 [DOI:10.3389/frobt.2020.00068]
  • Charles R Q, Su H, Mo K C and Guibas L J. 2017. PointNet: deep learning on point sets for 3D classification and segmentation//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 77-85[DOI: 10.1109/CVPR.2017.16]
  • Chatfield A B. 1997. Fundamentals of High Accuracy Inertial Navigation. Progress in Astronautics and Aeronautics: 15-32[DOI: 10.2514/4.866463]
  • Chen C B, Tian Y Y, Lin L, Chen S F, Li H W, Wang Y X, Su K X. 2020a. Obtaining world coordinate information of UAV in GNSS denied environments. Sensors, 20(8): #2241 [DOI:10.3390/s20082241]
  • Chen C H, Lu X X, Markham A and Trigoni N. 2018. IONet: learning to cure the curse of drift in inertial odometry//Proceedings of the 32nd AAAI Conference on Artificial Intelligence. New Orleans, USA: AAAI: 6468-6476
  • Chen C H, Wang B, Lu C X, Trigoni N and Markham A. 2020b. A survey on deep learning for localization and mapping: towards the age of spatial machine intelligenc[EB/OL]. [2020-06-22]. https://arxiv.org/pdf/2006.12567v1.pdf
  • Chen S L, Nan L L, Xia R B, Zhao J B, Wonka P. 2020c. PLADE: a plane-based descriptor for point cloud registration with small overlap. IEEE Transactions on Geoscience and Remote Sensing, 58(4): 2530-2540 [DOI:10.1109/TGRS.2019.2952086]
  • Chen S W, Nardari G V, Lee E S, Qu C, Liu X, Romero R A F, Kumar V. 2020d. SLOAM: semantic lidar odometry and mapping for forest inventory. IEEE Robotics and Automation Letters, 5(2): 612-619 [DOI:10.1109/LRA.2019.2963823]
  • Clark R, Bloesch M, Czarnowski J, Leutenegger S and Davison A J. 2018. Learning to solve nonlinear least squares for monocular stereo//Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer: 291-306[DOI: 10.1007/978-3-030-01237-3_18]
  • Czarnowski J, Laidlow T, Clark R, Davison A J. 2020. DeepFactors: real-time probabilistic dense monocular SLAM. IEEE Robotics and Automation Letters, 5(2): 721-728 [DOI:10.1109/LRA.2020.2965415]
  • Davison A J. 2003. Real-time simultaneous localisation and mapping with a single camera//Proceedings of the 9th IEEE International Conference on Computer Vision. Nice, France: IEEE: 1403-1410[DOI: 10.1109/ICCV.2003.1238654]
  • Deschaud J E. 2018. IMLS-SLAM: scan-to-model matching based on 3D data//Processings of 2018 IEEE International Conference on Robotics and Automation. Brisbane, Australia: IEEE: 2480-2485[DOI: 10.1109/ICRA.2018.8460653]
  • Dissanayake G, Huang S D, Wang Z and Ranasinghe R. 2011. A review of recent developments in simultaneous localization and mapping//Proceedings of the 6th International Conference on Industrial and Information Systems. Kandy, Sri Lanka: IEEE: 477-482[DOI: 10.1109/ICIINFS.2011.6038117]
  • Dong J M, Fei X H and Soatto S. 2017. Visual-inertial-semantic scene representation for 3D object detection//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 3567-3577[DOI: 10.1109/CVPR.2017.380]
  • Du T, Zeng Y H, Yang J, Tian C Z, Bai P F. 2020. Multi-sensor fusion SLAM approach for the mobile robot with a bio-inspired polarised skylight sensor. IET Radar, Sonar and Navigation, 14(12): 1950-1957 [DOI:10.1049/iet-rsn.2020.0260]
  • Dubé R, Cramariuc A, Dugas D, Nieto J, Siegwart R and Cadena C. 2018. SegMap: 3D segment mapping using data-driven descriptors[EB/OL]. [2020-06-05]. https://arxiv.org/pdf/1804.09557.pdf
  • Durrant-Whyte H, Bailey T. 2006. Simultaneous localization and mapping: part I. IEEE Robotics and Automation Magazine, 13(2): 99-110 [DOI:10.1109/MRA.2006.1638022]
  • Eigen D, Puhrsch C and Fergus R. 2014. Depth map prediction from a single image using a multi-scale deep network//Proceedings of the 27th International Conference on Neural Information Processing Systems. Montreal, Canada: ACM: 2366-2374
  • Engel J, Koltun V, Cremers D. 2018. Direct sparse odometry. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(3): 611-625 [DOI:10.1109/TPAMI.2017.2658577]
  • Engel J, Schöps T and Cremers D. 2014. LSD-SLAM: large-scale direct monocular SLAM//Proceedings of the 13th European Conference on Computer Vision. Zurich, Switzerland: Springer: 834-849[DOI: 10.1007/978-3-319-10605-2_54]
  • Engel J, Stückler J and Cremers D. 2015. Large-scale direct SLAM with stereo cameras//Proceedings of 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems. Hamburg, Germany: IEEE: 1935-1942[DOI: 10.1109/IROS.2015.7353631]
  • Enqvist O, Kahl F and Olsson C. 2011. Non-sequential structure from motion//Proceedings of 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops). Barcelona, Spain: IEEE: 264-271[DOI: 10.1109/ICCVW.2011.6130252]
  • Forster C, Carlone L, Dellaert F, Scaramuzza D. 2017a. On-manifold preintegration for real-time visual——inertial odometry. IEEE Transactions on Robotics, 33(1): 1-21 [DOI:10.1109/TRO.2016.2597321]
  • Forster C, Pizzoli M and Scaramuzza D. 2014. SVO: Fast semi-direct monocular visual odometry//Proceedings of 2014 IEEE International Conference on Robotics and Automation. Hong Kong, China: IEEE: 15-22[DOI: 10.1109/ICRA.2014.6906584]
  • Forster C, Zhang Z C, Gassner M, Werlberger M, Scaramuzza D. 2017b. SVO: semidirect visual odometry for monocular and multicamera systems. IEEE Transactions on Robotics, 33(2): 249-265 [DOI:10.1109/TRO.2016.2623335]
  • Gálvez-López D, Tardos J D. 2012. Bags of binary words for fast place recognition in image sequences. IEEE Transactions on Robotics, 28(5): 1188-1197 [DOI:10.1109/TRO.2012.2197158]
  • Gao X, Wang R, Demmel N and Cremers D. 2018. LDSO: direct sparse odometry with loop closure//Proceedings of 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems. Madrid, Spain: IEEE: 2198-2204[DOI: 10.1109/IROS.2018.8593376]
  • Gao X, Zhang T. 2017. Unsupervised learning to detect loops using deep neural networks for visual SLAM system. Autonomous Robots, 41(1): 1-18 [DOI:10.1007/s10514-015-9516-2]
  • Gao X, Zhang T. 2019. Introduction to Visual SLAM: From Theory to Practice. 2nd Edition. Beijing: Publishing House of Electronics Industry (高翔, 张涛. 2019. 视觉SLAM十四讲: 从理论到实践. 2版. 北京: 电子工业出版社)
  • Garcia-Garcia A, Orts-Escolano S, Oprea S, Villena-Martinez V and Garcia-Rodriguez J. 2017. A review on deep learning techniques applied to semantic segmentation[EB/OL]. [2020-06-05]. https://arxiv.org/pdf/1704.06857.pdf
  • Geneva P, Eckenhoff K, Yang Y L and Huang G Q. 2018. LIPS: LiDAR-inertial 3D plane SLAM//Proceedings of 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems. Madrid, Spain: IEEE: 123-130[DOI: 10.1109/IROS.2018.8594463]
  • Godard C, Mac Aodha O and Brostow G J. 2017. Unsupervised monocular depth estimation with left-right consistency//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 6602-6611[DOI: 10.1109/CVPR.2017.699]
  • Gomez-Ojeda R, Briales J and Gonzalez-Jimenez J. 2016. PL-SVO: semi-direct monocular visual odometry by combining points and line segments//Proceedings of 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems. Daejeon, Korea(South): IEEE: 4211-4216[DOI: 10.1109/IROS.2016.7759620]
  • Gomez-Ojeda R and Gonzalez-Jimenez J. 2016. Robust stereo visual odometry through a probabilistic combination of points and line segments//Proceedings of 2016 IEEE International Conference on Robotics and Automation. Stockholm, Sweden: IEEE: 2521-2526[DOI: 10.1109/ICRA.2016.7487406]
  • Graeter J, Wilczynski A and Lauer M. 2018. LIMO: lidar-monocular visual odometry//Proceedings of 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems. Madrid, Spain: IEEE: 7872-7879[DOI: 10.1109/IROS.2018.8594394]
  • Grinvald M, Furrer F, Novkovic T, Chung J J, Cadena C, Siegwart R, Nieto J. 2019. Volumetric instance-aware semantic mapping and 3D object discovery. IEEE Robotics and Automation Letters, 4(3): 3037-3044 [DOI:10.1109/LRA.2019.2923960]
  • Han L M, Lin Y M, Du G G and Lian S G. 2019. DeepVIO: self-supervised deep learning of monocular visual inertial odometry using 3D geometric constraints//Proceedings of 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems. Macau, China: IEEE: 6906-6913[DOI: 10.1109/IROS40897.2019.8968467]
  • Hartley R, Ghaffari M, Eustice R M, Grizzle J W. 2020. Contact-aided invariant extended Kalman filtering for robot state estimation. The International Journal of Robotics Research, 39(4): 402-430 [DOI:10.1177/0278364919894385]
  • Hess W, Kohler D, Rapp H and Andor D. 2016. Real-time loop closure in 2D LIDAR SLAM//Proceedings of 2016 IEEE International Conference on Robotics and Automation. Stockholm, Sweden: IEEE: 1271-1278[DOI: 10.1109/ICRA.2016.7487258]
  • Huang G Q. 2019. Visual-inertial navigation: a concise review//Proceedings of 2019 International Conference on Robotics and Automation. Montreal, Canada: IEEE: 9572-9582[DOI: 10.1109/ICRA.2019.8793604]
  • Jang J and Kim J. 2019. Dynamic grid adaptation for panel-based bathymetric SLAM//Proceedings of 2019 IEEE Underwater Technology (UT). Kaohsiung, China: 1-4[DOI: 10.1109/UT.2019.8734360]
  • Ji M Q, Gall J, Zheng H T, Liu Y B and Fang L. 2017. SurfaceNet: an end-to-end 3D neural network for multiview stereopsis//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 2326-2334[DOI: 10.1109/ICCV.2017.253]
  • Jia Y P, Luo H Y, Zhao F, Jiang G L, Li Y H, Yan J Q, Jiang Z Q and Wang Z T. 2021. Lvio-fusion: a self-adaptive multi-sensor fusion SLAM framework using actor-critic method[EB/OL]. [2021-06-05]. https://arxiv.org/pdf/2106.06783.pdf
  • Ke T, Wu K J and Roumeliotis S I. 2019. RISE-SLAM: a resource-aware inverse schmidt estimator for SLAM//Proceedings of 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems. Macau, China: IEEE: 354-361[DOI: 10.1109/IROS40897.2019.8967892]
  • Kendall A, Grimes M and Cipolla R. 2015. PoseNet: a convolutional network for real-time 6-DOF camera relocalization//Proceedings of 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE: 2938-2946[DOI: 10.1109/ICCV.2015.336]
  • Khan M S A, Chowdhury S S, Niloy N, Aurin F T Z and Ahmed T. 2018. Sonar-based SLAM using occupancy grid mapping and dead reckoning//TENCON 2018-2018 IEEE Region 10 Conference. Jeju, Korea(South): IEEE: 1707-1712[DOI: 10.1109/TENCON.2018.8650124]
  • Khattak S, Nguyen H, Mascarich F, Dang T and Alexis K. 2020. Complementary multi-modal sensor fusion for resilient robot pose estimation in subterranean environments//Proceedings of 2020 International Conference on Unmanned Aircraft Systems. Athens, Greece: IEEE: 1024-1029[DOI: 10.1109/ICUAS48674.2020.9213865]
  • Kim P, Lee H, Kim H J. 2019. Autonomous flight with robust visual odometry under dynamic lighting conditions. Autonomous Robots, 43(6): 1605-1622 [DOI:10.1007/s10514-018-9816-4]
  • Kim Y, Yoon S, Kim S, Kim A. 2021. Unsupervised balanced covariance learning for visual-inertial sensor fusion. IEEE Robotics and Automation Letters, 6(2): 819-826 [DOI:10.1109/LRA.2021.3051571]
  • Klein G and Murray D. 2007. Parallel tracking and mapping for small AR workspaces//The 6th IEEE and ACM International Symposium on Mixed and Augmented Reality. Nara, Japan: IEEE: 225-234[DOI: 10.1109/ISMAR.2007.4538852]
  • Konecny J, Prauzek M, Hlavica J. 2016. ICP algorithm in mobile robot navigation: analysis of computational demands in embedded solutions. IFAC-PapersOnLine, 49(25): 396-400 [DOI:10.1016/j.ifacol.2016.12.079]
  • Konolige K, Agrawal M and Solà J. 2010. Large-scale visual odometry for rough terrain//Kaneko M, Nakamura Y, eds. Robotics Research. Berlin: Springer: 201-212[DOI: 10.1007/978-3-642-14743-2_18]
  • Laskar Z, Melekhov I, Kalia S and Kannala J. 2017. Camera relocalization by computing pairwise relative poses using convolutional neural network//Proceedings of 2017 IEEE International Conference on Computer Vision Workshops. Venice, Italy: IEEE: 920-929[DOI: 10.1109/ICCVW.2017.113]
  • Lee S H, Civera J. 2019. Loosely-coupled semi-direct monocular slam. IEEE Robotics and Automation Letters, 4(2): 399-406 [DOI:10.1109/LRA.2018.2889156]
  • Leutenegger S, Lynen S, Bosse M, Siegwart R, Furgale P. 2015. Keyframe-based visual-inertial odometry using nonlinear optimization. The International Journal of Robotics Research, 34(3): 314-334 [DOI:10.1177/0278364914554813]
  • Levinson J and Thrun S. 2010. Robust vehicle localization in urban environments using probabilistic maps//Proceedings of 2010 IEEE International Conference on Robotics and Automation. Anchorage, USA: IEEE: 4372-4378[DOI: 10.1109/ROBOT.2010.5509700]
  • Li C, Xiao H, Tateno K, Tombari F, Navab N and Hager G D. 2016. Incremental scene understanding on dense SLAM//Proceedings of 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems. Daejeon, Korea(South): IEEE: 574-581[DOI: 10.1109/IROS.2016.7759111]
  • Li H A, Zhao J, Bazin J C and Liu Y H. 2020a. Quasi-globally optimal and near/true real-time vanishing point estimation in manhattan world. IEEE Transactions on Pattern Analysis and Machine Intelligence[DOI: 10.1109/TPAMI.2020.3023183]
  • Li Q, Chen S Y, Wang C, Li X, Wen C L, Cheng M and Li J. 2019a. LO-Net: deep real-time lidar odometry//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 8465-8474[DOI: 10.1109/CVPR.2019.00867]
  • Li S P, Zhang T, Gao X, Wang D, Xiao Y. 2019b. Semi-direct monocular visual and visual-inertial SLAM with loop closure detection. Robotics and Autonomous Systems, 112: 201-210 [DOI:10.1016/j.robot.2018.11.009]
  • Li X, Li Y Y, Örnek E P, Lin J L, Tombari F. 2020b. Co-planar parametrization for stereo-SLAM and visual-inertial odometry. IEEE Robotics and Automation Letters, 5(4): 6972-6979 [DOI:10.1109/LRA.2020.3027230]
  • Lianos K N, Schönberger J L, Pollefeys M and Sattler T. 2018. VSO: visual semantic odometry//Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer: 246-263[DOI: 10.1007/978-3-030-01225-0_15]
  • Liu F Y, Shen C H, Lin G S, Reid I. 2016. Learning depth from single monocular images using deep convolutional neural fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(10): 2024-2039 [DOI:10.1109/TPAMI.2015.2505283]
  • Liu H M, Chen M Y, Zhang G F, Bao H J and Bao Y Z. 2018. ICE-BA: incremental, consistent and efficient bundle adjustment for visual-inertial SLAM//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 1974-1982[DOI: 10.1109/CVPR.2018.00211]
  • Liu T A, Lin H Y and Lin W Y. 2019. InertialNet: toward robust SLAM via visual inertial measurement//Proceedings of 2019 IEEE Intelligent Transportation Systems Conference (ITSC). Auckland, New Zealand: IEEE: 1311-1316[DOI: 10.1109/ITSC.2019.8917003]
  • Long X X, Cheng X J, Zhu H, Zhang P J, Liu H M, Li J, Zheng L T, Hu Q Y, Liu H, Cao X, Yang R G, Wu Y H, Zhang G F, Liu Y B, Xu K, Guo Y L, Chen B Q. 2021. Recent progress in 3D vision. Journal of Image and Graphics, 26(6): 1389-1428 (龙霄潇, 程新景, 朱昊, 张朋举, 刘浩敏, 李俊, 郑林涛, 胡庆拥, 刘浩, 曹汛, 杨睿刚, 吴毅红, 章国锋, 刘烨斌, 徐凯, 郭裕兰, 陈宝权. 2021. 三维视觉前沿进展. 中国图象图形学报, 26(6): 1389-1428) [DOI:10.11834/jig.210043]
  • Lu W X, Zhou Y, Wan G W, Hou S H and Song S Y. 2019. L3-Net: towards learning based LiDAR localization for autonomous driving//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 6382-6391[DOI: 10.1109/CVPR.2019.00655]
  • Lynen S, Achtelik M W, Weiss S, Chli M and Siegwart R. 2013. A robust and modular multi-sensor fusion approach applied to MAV navigation//Proceedings of 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. Tokyo, Japan: IEEE: 3923-3929[DOI: 10.1109/IROS.2013.6696917]
  • Marzorati D, Matteucci M, Migliore D and Sorrenti D G. 2007. Integration of 3D lines and points in 6DoF visual SLAM by uncertainty projective geometry//Proceedings of the European Conference on Mobile Robots. Freiburg, Germany: EMCR
  • McCormac J, Clark R, Bloesch M, Davison A and Leutenegger S. 2018. Fusion++: volumetric object-level SLAM//Proceedings of 2018 International Conference on 3D Vision (3DV). Verona, Italy: IEEE: 32-41[DOI: 10.1109/3DV.2018.00015]
  • McCormac J, Handa A, Davison A and Leutenegger S. 2017. SemanticFusion: dense 3D semantic mapping with convolutional neural networks//Proceedings of 2017 IEEE International Conference on Robotics and Automation. Singapore, Singapore: IEEE: 4628-4635[DOI: 10.1109/ICRA.2017.7989538]
  • Meier K, Chung S J, Hutchinson S. 2018. Visual-inertial curve simultaneous localization and mapping: creating a sparse structured world without feature points. Journal of Field Robotics, 35(4): 516-544
  • Mourikis A I and Roumeliotis S I. 2007. A multi-state constraint Kalman filter for vision-aided inertial navigation//Proceedings of 2007 IEEE International Conference on Robotics and Automation. Rome, Italy: IEEE: 3565-3572[DOI: 10.1109/ROBOT.2007.364024]
  • Mourikis A I, Trawny N, Roumeliotis S I, Johnson A E, Ansar A, Matthies L. 2009. Vision-aided inertial navigation for spacecraft entry, descent, and landing. IEEE Transactions on Robotics, 25(2): 264-280 [DOI:10.1109/TRO.2009.2012342]
  • Mur-Artal R, Montiel J M M, Tardós J D. 2015. ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Transactions on Robotics, 31(5): 1147-1163 [DOI:10.1109/TRO.2015.2463671]
  • Mur-Artal R, Tardós J D. 2017. ORB-SLAM2:an open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Transactions on Robotics, 33(5): 1255-1262 [DOI:10.1109/TRO.2017.2705103]
  • Narita G, Seno T, Ishikawa T and Kaji Y. 2019. PanopticFusion: online volumetric semantic mapping at the level of stuff and things//Proceedings of 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems. Macau, China: IEEE: 4205-4212[DOI: 10.1109/IROS40897.2019.8967890]
  • Newcombe R A, Izadi S, Hilliges O, Molyneaux D, Kim D, Davison A J, Kohi P, Shotton J, Hodges S and Fitzgibbon A. 2011a. KinectFusion: real-time dense surface mapping and tracking//Proceedings of the 10th IEEE International Symposium on Mixed and Augmented Reality. Basel, Switzer-land: IEEE: 127-136[DOI: 10.1109/ISMAR.2011.6092378]
  • Newcombe R A, Lovegrove S J and Davison A J. 2011b. DTAM: dense tracking and mapping in real-time//Proceedings of 2011 International Conference on Computer Vision. Barcelona, Spain: IEEE: 2320-2327[DOI: 10.1109/ICCV.2011.6126513]
  • Nicholson L, Milford M, Sünderhauf N. 2019. QuadricSLAM: dual quadrics from object detections as landmarks in object-oriented SLAM. IEEE Robotics and Automation Letters, 4(1): 1-8 [DOI:10.1109/LRA.2018.2866205]
  • Nisar B, Foehn P, Falanga D, Scaramuzza D. 2019. VIMO: simultaneous visual inertial model-based odometry and force estimation. IEEE Robotics and Automation Letters, 4(3): 2785-2792 [DOI:10.1109/LRA.2019.2918689]
  • Olid D, Fácil J M and Civera J. 2018. Single-view place recognition under seasonal changes. [EB/OL]. [2021-06-05]. https://arxiv.org/pdf/1808.06516.pdf
  • Pumarola A, Vakhitov A, Agudo A, Sanfeliu A and Moreno-Noguer F. 2017. PL-SLAM: real-time monocular visual SLAM with points and lines//Proceedings of 2017 IEEE International Conference on Robotics and Automation. Singapore, Singapore: IEEE: 4503-4508[DOI: 10.1109/ICRA.2017.7989522]
  • Qin T, Li P L, Shen S J. 2018. VINS-Mono: a robust and versatile monocular visual-inertial state estimator. IEEE Transactions on Robotics, 34(4): 1004-1020 [DOI:10.1109/TRO.2018.2853729]
  • Roddick T and Cipolla R. 2020. Predicting semantic map representations from images using pyramid occupancy networks//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 11135-11144[DOI: 10.1109/CVPR42600.2020.01115]
  • Rosinol A, Abate M, Chang Y and Carlone L. 2020. Kimera: an open-source library for real-time metric-semantic localization and mapping//Proceedings of 2020 IEEE International Conference on Robotics and Automation. Paris, France: IEEE: 1689-1696[DOI: 10.1109/ICRA40945.2020.9196885]
  • Rünz M and Agapito L. 2017. Co-fusion: real-time segmentation, tracking and fusion of multiple objects//Proceedings of 2017 IEEE International Conference on Robotics and Automation. Singapore, Singapore: IEEE: 4471-4478[DOI: 10.1109/ICRA.2017.7989518]
  • Runz M, Buffier M and Agapito L. 2018. MaskFusion: real-time recognition, tracking and reconstruction of multiple moving objects//Proceedings of 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). Munich, Germany: IEEE: 10-20[DOI: 10.1109/ISMAR.2018.00024]
  • Salas-Moreno R F, Newcombe R A, Strasdat H, Kelly P H J and Davison A J. 2013. SLAM++: simultaneous localisation and mapping at the level of objects//Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition. Portland, USA: IEEE: 1352-1359[DOI: 10.1109/CVPR.2013.178]
  • Sarlin P E, Cadena C, Siegwart R and Dymczyk M. 2019. From coarse to fine: robust hierarchical localization at large scale//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 12708-12717[DOI: 10.1109/CVPR.2019.01300]
  • Sawa T, Yanagi T, Kusayanagi Y, Tsukui S and Yoshida A. 2018. Seafloor mapping by 360 degree view camera with sonar supports//Proceedings of 2018 OCEANS-MTS/IEEE Kobe Techno-Oceans. Kobe, Japan: IEEE: 1-4[DOI: 10.1109/OCEANSKOBE.2018.8559360]
  • Schops T, Schonberger J L, Galliani S, Sattler T, Schindler K, Pollefeys M and Geiger A. 2017. A multi-view stereo benchmark with high-resolution images and multi-camera videos//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 2538-2547[DOI: 10.1109/CVPR.2017.272]
  • Shamwell E J, Lindgren K, Leung S, Nothwang W D. 2020. Unsupervised deep visual-inertial odometry with online error correction for RGB-D imagery. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(10): 2478-2493 [DOI:10.1109/TPAMI.2019.2909895]
  • Shan T X and Englot B. 2018. LeGO-LOAM: lightweight and ground-optimized lidar odometry and mapping on variable terrain//Proceedings of 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems. Madrid, Spain: IEEE: 4758-4765[DOI: 10.1109/IROS.2018.8594299]
  • Shan T X, Englot B, Meyers D, Wang W, Ratti C and Rus D. 2020. LIO-SAM: tightly-coupled lidar inertial odometry via smoothing and mapping//Proceedings of 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems. Las Vegas, USA: IEEE: 5135-5142[DOI: 10.1109/IROS45743.2020.9341176]
  • Shan T X, Englot B, Ratti C and Rus D. 2021. LVI-SAM: tightly-coupled lidar-visual-inertial odometry via smoothing and mapping. [EB/OL]. [2021-06-05]. https://arxiv.org/pdf/2104.10831.pdf
  • Shao W Z, Vijayarangan S, Li C and Kantor G. 2019. Stereo visual inertial LiDAR simultaneous localization and mapping//Proceedings of 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems. Macau, China: IEEE: 370-377[DOI: 10.1109/IROS40897.2019.8968012]
  • Sheng L, Xu D, Ouyang W L and Wang X G. 2019. Unsupervised collaborative learning of keyframe detection and visual odometry towards monocular deep SLAM//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea(South): IEEE: 4301-4310[DOI: 10.1109/ICCV.2019.00440]
  • Smith R C, Cheeseman P. 1986. On the representation and estimation of spatial uncertainty. The International Journal of Robotics Research, 5(4): 56-68 [DOI:10.1177/027836498600500404]
  • Strasdat H, Davison A J, Montiel J M M and Konolige K. 2011. Double window optimisation for constant time visual SLAM//Proceedings of 2011 International Conference on Computer Vision. Barcelona, Spain: IEEE: 2352-2359[DOI: 10.1109/ICCV.2011.6126517]
  • Strasdat H, Montiel J M M and Davison A J. 2012. Visual SLAM: why filter? Image and Vision Computing, 30(2): 65-77[DOI: 10.1016/j.imavis.2012.02.009]
  • Sturm J, Engelhard N, Endres F, Burgard W and Cremers D. 2012. A benchmark for the evaluation of RGB-D SLAM systems//Proceedings of 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. Vilamoura-Algarve, Portugal: IEEE: 573-580[DOI: 10.1109/IROS.2012.6385773]
  • Sünderhauf N, Shirazi S, Jacobson A, Dayoub F and Milford M. 2015. Place recognition with convnet landmarks: viewpoint-robust, condition-robust, training-free//Hsu D, ed. Robotics: Science and Systems XI. Rome: Sapienza University of Rome: 1-10
  • Tang C Z and Tan P. 2018. BA-Net: dense bundle adjustment network. [EB/OL]. [2021-06-05]. https://arxiv.org/pdf/1806.04807.pdf
  • Tardif J P, George M, Laverne M, Kelly A and Stentz A. 2010. A new approach to vision-aided inertial navigation//Proceedings of 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems. Taipei, China: IEEE: 4161-4168[DOI: 10.1109/IROS.2010.5651059]
  • Tateno K, Tombari F, Laina I and Navab N. 2017. CNN-SLAM: real-time dense monocular SLAM with learned depth prediction//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 6565-6574[DOI: 10.1109/CVPR.2017.695]
  • Tateno K, Tombari F and Navab N. 2015. Real-time and scalable incremental segmentation on dense SLAM//Proceedings of 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems. Hamburg, Germany: IEEE: 4465-4472[DOI: 10.1109/IROS.2015.7354011]
  • Thrun S, Burgard W, Fox D. 2005. Probabilistic Robotics. Cambridge: The MIT Press
  • Titterton D, Weston J. 2005. Strapdown inertial navigation technology, 2nd edition [Book review]. IEEE Aerospace and Electronic Systems Magazine, 20(7): 33-34 [DOI:10.1109/MAES.2005.1499250]
  • Trawny N and Roumeliotis S I. 2005. Indirect Kalman Filter for 3D Attitude Estimation. University of Minnesota, Department of Computer Science & Engineering. Technical Report Number 2005-002, Rev. 57: 1-25
  • Ummenhofer B, Zhou H Z, Uhrig J, Mayer N, Ilg E, Dosovitskiy A and Brox T. 2017. DeMoN: depth and motion network for learning monocular stereo//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 5622-5631[DOI: 10.1109/CVPR.2017.596]
  • van Dinh N and Kim G W. 2020. Multi-sensor fusion towards VINS: a concise tutorial, survey, framework and challenges//Proceedings of 2020 IEEE International Conference on Big Data and Smart Computing (BigComp). Busan, Korea(South): IEEE: 459-462[DOI: 10.1109/BigComp48618.2020.00-26]
  • Wald J, Tateno K, Sturm J, Navab N, Tombari F. 2018. Real-time fully incremental scene understanding on mobile platforms. IEEE Robotics and Automation Letters, 3(4): 3402-3409 [DOI:10.1109/LRA.2018.2852782]
  • Wang R, Schwörer M and Cremers D. 2017a. Stereo DSO: large-scale direct sparse visual odometry with stereo cameras//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 3923-3931[DOI: 10.1109/ICCV.2017.421]
  • Wang S, Clark R, Wen H K and Trigoni N. 2017b. DeepVO: towards end-to-end visual odometry with deep recurrent convolutional neural networks//Proceedings of 2017 IEEE International Conference on Robotics and Automation. Singapore, Singapore: IEEE: 2043-2050[DOI: 10.1109/ICRA.2017.7989236]
  • Wang S, Clark R, Wen H K, Trigoni N. 2018. End-to-end, sequence-to-sequence probabilistic visual odometry through deep neural networks. The International Journal of Robotics Research, 37(4/5): 513-542 [DOI:10.1177/0278364917734298]
  • Wang Z Y, Zhang J H, Chen S Y, Yuan C E, Zhang J Q and Zhang J W. 2019. Robust high accuracy visual-inertial-laser SLAM system//Proceedings of 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems. Macau, China: IEEE: 6636-6641[DOI: 10.1109/IROS40897.2019.8967702]
  • Weiss S and Siegwart R. 2011. Real-time metric state estimation for modular vision-inertial systems//Proceedings of 2011 IEEE International Conference on Robotics and Automation. Shanghai, China: IEEE: 4531-4537[DOI: 10.1109/ICRA.2011.5979982]
  • Whelan T, Leutenegger S, Salas-Moreno R F, Glocker B and Davison A J. 2015. ElasticFusion: dense SLAM without a pose graph. Robotics: Science and Systems, #11
  • Wisth D, Camurri M, Das S, Fallon M. 2021. Unified multi-modal landmark tracking for tightly coupled lidar-visual-inertial odometry. IEEE Robotics and Automation Letters, 6(2): 1004-1011 [DOI:10.1109/LRA.2021.3056380]
  • Wu D, Meng Y, Zhan K and Ma F. 2018. A LIDAR SLAM based on point-line features for underground mining vehicle//Proceedings of 2018 Chinese Automation Congress (CAC). Xi'an, China: IEEE: 2879-2883[DOI: 10.1109/CAC.2018.8623075]
  • Xiang Y and Fox D. 2017. DA-RNN: semantic mapping with data associated recurrent neural networks. [EB/OL]. [2021-06-05]. https://arxiv.org/pdf/1703.03098.pdf
  • Xu B B, Li W B, Tzoumanikas D, Bloesch M, Davison A and Leutenegger S. 2019a. MID-Fusion: octree-based object-level multi-instance dynamic SLAM//Proceedings of 2019 IEEE International Conference on Robotics and Automation. Montreal, Canada: IEEE: 5231-5237[DOI: 10.1109/ICRA.2019.8794371]
  • Xu C Y, Chen J W, Zhu H C, Liu H H, Lin Y. 2019b. Experimental research on seafloor mapping and vertical deformation monitoring for gas hydrate zone using nine-axis MEMS sensor tapes. IEEE Journal of Oceanic Engineering, 44(4): 1090-1101 [DOI:10.1109/JOE.2018.2859498]
  • Yang N, Von Stumberg L, Wang R and Cremers D. 2020. D3VO: deep depth, deep pose and deep uncertainty for monocular visual odometry//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 1278-1289[DOI: 10.1109/CVPR42600.2020.00136]
  • Yang S C, Scherer S. 2019. CubeSLAM: monocular 3-D object SLAM. IEEE Transactions on Robotics, 35(4): 925-938 [DOI:10.1109/TRO.2019.2909168]
  • Yang Y L, Geneva P, Eckenhoff K and Huang G Q. 2019a. Visual-inertial odometry with point and line features//Proceedings of 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems. Macau, China: IEEE: 2447-2454[DOI: 10.1109/IROS40897.2019.8967905]
  • Yang Y L, Geneva P, Zuo X X, Eckenhoff K, Liu Y and Huang G Q. 2019b. Tightly-coupled aided inertial navigation with point and plane features//Proceedings of 2019 International Conference on Robotics and Automation. Montreal, Canada: IEEE: 6094-6100[DOI: 10.1109/ICRA.2019.8794078]
  • Ye H Y, Chen Y Y and Liu M. 2019. Tightly coupled 3D lidar inertial odometry and mapping//Proceedings of 2019 International Conference on Robotics and Automation. Montreal, Canada: IEEE: 3144-3150[DOI: 10.1109/ICRA.2019.8793511]
  • Zhang G X, Lee J H, Lim J, Suh I H. 2015. Building a 3-D line-based map using stereo SLAM. IEEE Transactions on Robotics, 31(6): 1364-1377 [DOI:10.1109/TRO.2015.2489498]
  • Zhang J, Kaess M and Singh S. 2014. Real-time depth enhanced monocular odometry//Proceedings of 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems. Chicago, USA: IEEE: 4973-4980[DOI: 10.1109/IROS.2014.6943269]
  • Zhang J, Kaess M and Singh S. 2016. On degeneracy of optimization-based state estimation problems//Proceedings of 2016 IEEE International Conference on Robotics and Automation. Stockholm, Sweden: IEEE: 809-816[DOI: 10.1109/ICRA.2016.7487211]
  • Zhang J and Singh S. 2014. LOAM: lidar odometry and mapping in real-time//Robotics: Science and Systems. Berkeley: [s.n.][DOI: 10.15607/RSS.2014.X.007]
  • Zhang J and Singh S. 2015. Visual-lidar odometry and mapping: low-drift, robust, and fast//Proceedings of 2015 IEEE International Conference on Robotics and Automation. Seattle, USA: IEEE: 2174-2181[DOI: 10.1109/ICRA.2015.7139486]
  • Zhang M M, Zuo X X, Chen Y M, Liu Y, Li M Y. 2021. Pose estimation for ground robots: on manifold representation, integration, reparameterization, and optimization. IEEE Transactions on Robotics, 37(4): 1081-1099 [DOI:10.1109/TRO.2020.3043970]
  • Zheng L T, Zhu C Y, Zhang J Z, Zhao H, Huang H, Niessner M, Xu K. 2019. Active scene understanding via online semantic reconstruction. Computer Graphics Forum, 38(7): 103-114 [DOI:10.1111/cgf.13820]
  • Zhou H Y, Yao Z, Lu M Q. 2021. UWB/lidar coordinate matching method with anti-degeneration capability. IEEE Sensors Journal, 21(3): 3344-3352 [DOI:10.1109/JSEN.2020.3023738]
  • Zhou H Z, Ummenhofer B, Brox T. 2020. DeepTAM: deep tracking and mapping with convolutional neural networks. International Journal of Computer Vision, 128(3): 756-769 [DOI:10.1007/s11263-019-01221-0]
  • Zhou H Z, Zou D P, Pei L, Ying R D, Liu P L, Yu W X. 2015. StructSLAM: visual SLAM with building structure lines. IEEE Transactions on Vehicular Technology, 64(4): 1364-1375 [DOI:10.1109/TVT.2015.2388780]
  • Zhou T H, Brown M, Snavely N and Lowe D G. 2017. Unsupervised learning of depth and ego-motion from video//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 6612-6619[DOI: 10.1109/CVPR.2017.700]
  • Zhou Y and Tuzel O. 2018. VoxelNet: end-to-end learning for point cloud based 3D object detection//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 4490-4499[DOI: 10.1109/CVPR.2018.00472]
  • Zhu X X, Yu Y, Wang P F, Lin M J, Zhang H R and Cao Q X. 2019. A visual SLAM system based on the panoramic camera//Proceedings of 2019 IEEE International Conference on Real-time Computing and Robotics. Irkutsk, Russia: IEEE: 53-58[DOI: 10.1109/RCAR47638.2019.9044117]
  • Zou H, Chen C L, Li M X, Yang J F, Zhou Y X, Xie L H, Spanos C J. 2020. Adversarial learning-enabled automatic WiFi indoor radio map construction and adaptation with mobile robot. IEEE Internet of Things Journal, 7(8): 6946-6954 [DOI:10.1109/JIOT.2020.2979413]
  • Zou X, Xiao C S, Wen Y Q, Yuan H W. 2020. Research of feature-based and direct methods VSLAM. Application Research of Computers, 37(5): 1281-1291 (邹雄, 肖长诗, 文元桥, 元海文. 2020. 基于特征点法和直接法VSLAM的研究. 计算机应用研究, 37(5): 1281-1291) [DOI:10.19734/j.issn.1001-3695.2018.11.0789]
  • Zuo X X. 2021. Robust and Intelligent Multi-Source Fusion SLAM Technology. Hangzhou: Zhejiang University (左星星. 2021. 面向鲁棒和智能化的多源融合SLAM技术研究. 杭州: 浙江大学)
  • Zuo X X, Geneva P, Lee W, Liu Y and Huang G Q. 2019a. LIC-fusion: LiDAR-inertial-camera odometry//Proceedings of 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems. Macau, China: IEEE: 5848-5854[DOI: 10.1109/IROS40897.2019.8967746]
  • Zuo X X, Merrill N, Li W, Liu Y, Pollefeys M and Huang G Q. 2021. CodeVIO: visual-inertial odometry with learned optimizable dense depth//Proceedings of 2021 IEEE International Conference on Robotics and Automation. Xi'an, China: IEEE: 14382-14388[DOI: 10.1109/ICRA48506.2021.9560792]
  • Zuo X X, Xie X J, Liu Y and Huang G Q. 2017. Robust visual SLAM with point and line features//Proceedings of 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems. Vancouver, Canada: IEEE: 1775-1782[DOI: 10.1109/IROS.2017.8205991]
  • Zuo X X, Yang Y L, Geneva P, Lv J J, Liu Y, Huang G Q and Pollefeys M. 2020. LIC-fusion 2.0: LiDAR-inertial-camera odometry with sliding-window plane-feature tracking//Proceedings of 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems. Las Vegas, USA: IEEE: 5112-5119[DOI: 10.1109/IROS45743.2020.9340704]
  • Zuo X X, Zhang M M, Chen Y M, Liu Y, Huang G Q and Li M Y. 2019b. Visual-inertial localization for skid-steering robots with kinematic constraints. [EB/OL]. [2021-06-05]. https://arxiv.org/pdf/1911.05787.pdf