Review on visual-inertial navigation and positioning technology
Vol. 26, Issue 6, Pages: 1470-1482 (2021)
Received: 31 December 2020; Revised: 12 January 2021; Accepted: 19 January 2021; Published: 16 June 2021
DOI: 10.11834/jig.200863
Visual-inertial navigation and positioning is a passive navigation and positioning approach that uses visual and inertial sensors to estimate the carrier's own pose and perceive the surrounding environment, enabling six-degree-of-freedom pose estimation in global positioning system (GPS)-denied environments. Cameras and low-precision inertial sensors are small and inexpensive, and the two sensors are highly complementary in navigation and positioning tasks. Visual-inertial navigation systems (VINS) have therefore attracted great attention and play an important role in mobile virtual reality (VR), augmented reality (AR), and the autonomous navigation of unmanned systems, with significant theoretical research value and practical application demand. This paper introduces the visual-inertial navigation system and summarizes the research progress of its key technologies, including initialization, visual front-end processing, state estimation, map construction and maintenance, and information fusion. Hot topics such as visual-inertial navigation and positioning in non-ideal environments and learning-based algorithms are reviewed, the methods and standard datasets used for algorithm evaluation are summarized, the main problems faced in practical applications are discussed, and future development trends of the field are prospected.
Visual-inertial navigation and positioning technology is a passive navigation method that estimates ego-motion and perceives the surrounding environment. In particular, it can provide six-degree-of-freedom (DOF) pose estimation of the carrier in GPS-denied environments, such as indoor and underwater settings, and can even play a positive role in space exploration. In addition, from a biological point of view, visual-inertial navigation is a bionic navigation method, because humans and animals localize themselves through visual and motion perception. Visual-inertial integrated navigation has significant advantages. First, the sensors are small and low in cost. Second, unlike active navigation, a visual-inertial navigation system (VINS) does not rely on external auxiliary devices: navigation and positioning are realized independently, without exchanging information with the external environment.
Finally, visual and inertial sensors have highly complementary characteristics. Visual navigation has a low output frequency and accumulates no error while stationary, but it is susceptible to changes in the external environment and cannot cope with fast motion. Inertial navigation, by contrast, has a high output frequency, is robust to changes in the external environment, and accurately captures the rapid motion of the carrier, but its error accumulates over time. Thanks to this complementarity, VINS plays an important role in mobile virtual reality, augmented reality, and the autonomous navigation of unmanned systems, and has important theoretical research value and practical application requirements.
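To make this complementarity concrete, the following is a minimal illustrative sketch of a loosely coupled fusion loop (a toy example under stated assumptions, not the method of any surveyed system; all names and rates are hypothetical): the IMU propagates the state at high rate and drifts, while a slower, drift-free visual position estimate periodically corrects it.

```python
import numpy as np

# Toy loosely coupled visual-inertial fusion (illustrative only).
# Real VINS estimate full SE(3) poses with an EKF or sliding-window
# optimization; this sketch keeps only position and velocity.
IMU_RATE, CAM_RATE = 200.0, 20.0      # Hz (typical orders of magnitude)
dt = 1.0 / IMU_RATE

pos = np.zeros(3)                     # fused position estimate (m)
vel = np.zeros(3)                     # fused velocity estimate (m/s)
GAIN = 0.2                            # correction gain for visual updates

def imu_propagate(accel_world):
    """High-rate prediction: integrating acceleration drifts over time."""
    global pos, vel
    vel += accel_world * dt
    pos += vel * dt

def visual_update(pos_visual):
    """Low-rate correction: pull the drifting inertial estimate toward
    the noisier but drift-free visual position fix."""
    global pos
    pos += GAIN * (pos_visual - pos)

for k in range(int(IMU_RATE)):        # simulate one second of data
    imu_propagate(np.zeros(3))        # stand-in IMU sample
    if k % int(IMU_RATE / CAM_RATE) == 0:
        visual_update(np.zeros(3))    # stand-in visual fix
```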
In recent years, visual-inertial navigation technology has developed rapidly, and many excellent works have emerged and enriched its theory. At present, the structure of the algorithms is relatively fixed, and the positioning accuracy of state-of-the-art VINS reaches the centimeter level in some small-scale structured scenes. However, many problems remain when the technology is applied to complex practical scenes. On the one hand, real-time performance is difficult to guarantee because visual image processing and back-end optimization impose a heavy computational burden, while large-scale mapping strains memory consumption.
On the other hand, performance is poor in low-texture, dynamically illuminated, large-scale, and dynamic scenes. Such complex environments challenge the stability of VINS and are currently the major obstacle to its large-scale application; they directly degrade the results of the visual front-end and are often difficult to handle with traditional geometric methods. Given the strength of deep learning in image processing, some researchers have attempted to replace traditional image processing with learned components, or even to abandon the traditional VINS framework altogether and estimate poses directly in an end-to-end fashion. Learning-based methods can exploit the rich semantic information in images and hold an advantage in complex environments such as dynamic scenes.
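As an illustration of the end-to-end idea, the sketch below is a schematic assumption in PyTorch, not the architecture of any particular surveyed system: a CNN encodes a pair of consecutive images, a recurrent network encodes the IMU samples between them, and a fused recurrent core regresses the 6-DOF relative pose, in the spirit of sequence-to-sequence learning.

```python
import torch
import torch.nn as nn

class TinyVIONet(nn.Module):
    """Hypothetical, minimal end-to-end VIO network for illustration."""
    def __init__(self):
        super().__init__()
        self.visual = nn.Sequential(      # encodes a stacked RGB image pair
            nn.Conv2d(6, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.inertial = nn.LSTM(6, 64, batch_first=True)   # gyro + accel
        self.fusion = nn.LSTM(32 + 64, 128, batch_first=True)
        self.pose = nn.Linear(128, 6)     # 3 translation + 3 rotation

    def forward(self, img_pair, imu_seq):
        v = self.visual(img_pair)             # (B, 32) visual feature
        _, (h, _) = self.inertial(imu_seq)    # last hidden state, (1, B, 64)
        fused = torch.cat([v, h[-1]], dim=1).unsqueeze(1)
        out, _ = self.fusion(fused)           # recurrent fusion step
        return self.pose(out[:, -1])          # (B, 6) relative pose

net = TinyVIONet()                            # sanity check on random input
pose = net(torch.randn(2, 6, 128, 128), torch.randn(2, 10, 6))
```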
The purpose of this article is to help those interested in VINS quickly understand the current state of research in this field, as well as the research directions worth attention. The VINS is first introduced, and the research progress of its key technologies, including initialization, visual front-end processing, state estimation, map construction and maintenance, and information fusion, is summarized. In addition, hot issues such as visual-inertial navigation in non-ideal environments and learning-based localization algorithms are reviewed. Finally, the standard datasets used for algorithm evaluation are summarized, and the future development trends of this field are prospected.
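When algorithms are evaluated on such datasets, accuracy is commonly reported as the absolute trajectory error (ATE) after rigidly aligning the estimated trajectory to the ground truth; the following is a minimal sketch of that computation (an illustrative assumption mirroring what evaluation tools such as evo automate).

```python
import numpy as np

def absolute_trajectory_error(est, gt):
    """Minimal ATE sketch: align estimated positions `est` (N, 3) to the
    ground truth `gt` (N, 3) with the closed-form Kabsch/Umeyama solution
    (rotation + translation, no scale), then return the RMSE."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                               # best-fit rotation
    t = mu_g - R @ mu_e                              # best-fit translation
    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))

# Sanity check: a rotated and shifted copy of a trajectory has near-zero ATE.
gt = np.cumsum(np.random.randn(100, 3) * 0.1, axis=0)
c, s = np.cos(0.3), np.sin(0.3)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
print(absolute_trajectory_error(gt @ Rz.T + np.array([1.0, 2.0, 0.5]), gt))
```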