Review on deep learning rigid point cloud registration
2022, Vol. 27, No. 2: 329-348
Print publication date: 2022-02-16
Accepted: 2021-10-08
DOI: 10.11834/jig.210556
Hongxing Qin, Zhentao Liu, Boyuan Tan. Review on deep learning rigid point cloud registration[J]. Journal of Image and Graphics, 2022, 27(2): 329-348.
With the growing adoption of 3D acquisition devices, point cloud registration is being applied in more and more fields. However, traditional methods perform poorly under low overlap, heavy noise, many outliers, and large scenes, which limits the application of point cloud registration in real scenarios. Facing these limitations, point cloud registration methods that incorporate deep learning techniques have emerged; this paper refers to them as deep point cloud registration and surveys their research progress. First, current deep learning point cloud registration methods are divided, according to the presence or absence of correspondences, into correspondence-free registration and correspondence-based registration. For correspondence-based registration, the methods are classified and summarized in detail by their main function, including geometric feature extraction, keypoint detection, correspondence outlier removal, pose estimation, and end-to-end registration, with an emphasis on the most recent methods. For correspondence-free registration, the characteristics of each class of methods are described in detail, and the characteristics of correspondence-free versus correspondence-based methods are summarized. For performance evaluation, the existing main evaluation metrics are first classified and summarized in detail, and their applicable scenarios are given. For real datasets, comparative data on feature matching and correspondence outlier removal are given and summarized. For synthetic datasets, comparative data of related methods in partial-overlap, real-time, and global registration scenarios are given. Finally, the current challenges of deep point cloud registration are discussed and future research directions are outlined.
A sharp increase in point cloud data over the past decade has driven the development of point cloud processing algorithms. Point cloud registration is the process of transforming point cloud data from two or more camera coordinate systems into the world coordinate system to complete stitching. In 3D reconstruction, scanning equipment commonly captures partial views of a scene, and the whole scene is reconstructed through point cloud registration. In high-precision mapping and positioning, local point cloud fragments obtained from a driving vehicle are registered to a pre-built scene map to achieve high-precision localization of the vehicle. In addition, point cloud registration is widely used in pose estimation, robotics, medicine, and other fields. Real-world point cloud acquisition introduces heavy noise, outliers, and low overlap, which pose great challenges to traditional methods. Currently, deep learning is widely used in point cloud registration and has achieved remarkable results. To overcome the limitations of traditional methods, some researchers have developed point cloud registration methods integrated with deep learning technology
which is called deep point cloud registration. First, this review distinguishes current deep learning point cloud registration methods according to the presence or absence of correspondences, dividing them into correspondence-free registration and correspondence-based registration. The main functions of the various methods are classified as follows: 1) geometric feature extraction; 2) keypoint detection; 3) outlier removal; 4) pose estimation; and 5) end-to-end registration. The geometric feature extraction module learns to encode the local geometric structure of the point cloud so that the network generates discriminative features. Keypoint detection identifies the points essential to the registration task among a large number of input points, eliminating potential outliers while reducing computational complexity. Correspondence outlier removal is the final checking step before the motion parameters are estimated, ensuring the accuracy and efficiency of the solution. In correspondence-free point cloud registration, a network structure similar to PointNet extracts pose-aware global features of the point cloud, and the rigid transformation parameters are estimated from these global features. In terms of performance evaluation,
the feature matching and registration error metrics are described in detail. Feature matching metrics mainly include the inlier ratio (IR) and feature matching recall (FMR). Registration error metrics include the root mean square error (RMSE), mean square error (MSE), mean absolute error (MAE), relative translation error (RTE), relative rotation error (RRE), chamfer distance (CD), and registration recall (RR). RMSE, MSE, and MAE are the most widely used metrics, but they have the disadvantage of being anisotropic. The isotropic RRE and RTE directly measure the differences in rotation angle and translation distance, respectively. These five metrics all penalize the registration of axisymmetric point clouds unequally, and CD is the fairest metric in that respect. Meanwhile, evaluation on real datasets tends to focus on the registration success rate. With respect to real datasets
this review provides comparative data for feature matching and outlier removal. For synthetic datasets, it presents comparative data of related methods in partial-overlap, real-time, and global registration scenarios. Finally
future research directions are drawn from the current challenges in this field. 1) Point cloud registration faces diverse application scenarios, and general-purpose algorithms are difficult to develop; therefore, lightweight and efficient dedicated modules are more popular. 2) By segmenting the overlap region, partial-overlap registration can be reduced to the registration of fully overlapping regions. This approach is expected to lift restrictions on the required overlap rate and fundamentally solve partially overlapping point cloud registration, so it has great application value and prospects. 3) Most mainstream methods use multilayer perceptrons (MLPs) to learn saliency from data. 4) Some researchers have introduced the random sample consensus (RANSAC) idea into neural networks and achieved state-of-the-art results, but at the cost of higher complexity; balancing performance and complexity is therefore an issue to be considered in this sub-field. 5) The correspondence-free registration methods are based on learning pose-related global features. The global features extracted by existing methods are sensitive to noise and partial overlap, mainly because cluttered information is fused into the global features. Moreover, correspondence-free methods have not been widely applied to real data, and their robustness is still questioned by some researchers. Robust extraction of pose-aware global features therefore remains one of the main research issues.
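As a concrete illustration of the error metrics discussed above, the following sketch computes RRE, RTE, and the chamfer distance. It assumes NumPy; the function and variable names are ours for illustration and do not come from any of the surveyed papers.

```python
import numpy as np

def rre_deg(R_est, R_gt):
    """Relative rotation error: geodesic angle (degrees) between two rotations."""
    cos_theta = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def rte(t_est, t_gt):
    """Relative translation error: Euclidean distance between translation vectors."""
    return np.linalg.norm(t_est - t_gt)

def chamfer_distance(P, Q):
    """Symmetric chamfer distance between point sets P (N,3) and Q (M,3)."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

Unlike RMSE on the rotation matrix entries, RRE is isotropic: it reports a single angle regardless of the rotation axis, which matches how the abstract contrasts the two families of metrics.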
point cloud registration; deep learning; correspondence-free registration; end-to-end registration; correspondence; geometric feature extraction; outlier removal; review
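The pose estimation stage surveyed above, in its classical closed form (the inner step of ICP, Besl and McKay 1992), solves for the least-squares rigid transform between corresponded points via SVD. A minimal sketch under that assumption (NumPy; names are ours):

```python
import numpy as np

def rigid_from_correspondences(P, Q):
    """Least-squares rigid transform (R, t) mapping P onto Q, both (N, 3),
    via the SVD (Kabsch) closed-form solution."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)            # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

Many of the correspondence-based deep methods reviewed here keep this solver as a differentiable final layer and focus their learning on producing clean correspondences (or weights) to feed it.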
Aiger D, Mitra N J and Cohen-Or D. 2008. 4-points congruent sets for robust pairwise surface registration. ACM Transactions on Graphics, 27(3): 1-10[DOI:10.1145/1360612.1360684]
Ao S, Hu Q Y, Yang B, Markham A and Guo Y L. 2021. SpinNet: learning a general surface descriptor for 3D point cloud registration//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 11748-11757[DOI: 10.1109/CVPR46437.2021.01158]
Aoki Y, Goforth H, Srivatsan R A and Lucey S. 2019. PointNetLK: robust and efficient point cloud registration using PointNet//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 7156-7165[DOI: 10.1109/CVPR.2019.00733]
Bai X Y, Luo Z X, Zhou L, Chen H K, Li L, Hu Z Y, Fu H B and Tai C L. 2021. PointDSC: robust point cloud registration using deep spatial consistency//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 15854-15864[DOI: 10.1109/CVPR46437.2021.01560]
Bai X Y, Luo Z X, Zhou L, Fu H B, Quan L and Tai C L. 2020. D3Feat: joint learning of dense detection and description of 3D local features//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 6358-6366[DOI: 10.1109/CVPR42600.2020.00639]
Bello S A, Yu S S, Wang C, Adam J M and Li J. 2020. Review: deep learning on 3D point clouds. Remote Sensing, 12(11): #1729[DOI:10.3390/rs12111729]
Besl P J and McKay N D. 1992. Method for registration of 3-D shapes//Proceedings of SPIE 1611, Sensor Fusion IV: Control Paradigms and Data Structures. Boston, USA: SPIE: 586-606[DOI: 10.1117/12.57955]
Biber P and Strasser W. 2003. The normal distributions transform: a new approach to laser scan matching//Proceedings of 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003). Las Vegas, USA: IEEE: 2743-2748[DOI: 10.1109/IROS.2003.1249285]
Cheng L, Chen S, Liu X Q, Xu H, Wu Y, Li M C and Chen Y M. 2018. Registration of laser scanning point clouds: a review. Sensors, 18(5): 1641[DOI:10.3390/s18051641]
Choy C, Dong W and Koltun V. 2020. Deep global registration//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 2511-2520[DOI: 10.1109/CVPR42600.2020.00259]
Choy C, Gwak J and Savarese S. 2019a. 4D spatio-temporal ConvNets: Minkowski convolutional neural networks//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE: 3070-3079[DOI: 10.1109/CVPR.2019.00319]
Choy C, Park J and Koltun V. 2019b. Fully convolutional geometric features//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea(South): IEEE: 8957-8965[DOI: 10.1109/ICCV.2019.00905]
Deng H W, Birdal T and Ilic S. 2018a. PPF-FoldNet: unsupervised learning of rotation invariant 3D local descriptors//Proceedings of the 15th European Conference on Computer Vision (ECCV). Munich, Germany: Springer: 620-638[DOI: 10.1007/978-3-030-01228-1_37]
Deng H W, Birdal T and Ilic S. 2018b. PPFNet: global context aware local features for robust 3D point matching//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 195-205[DOI: 10.1109/CVPR.2018.00028]
Deng H W, Birdal T and Ilic S. 2019. 3D local features for direct pairwise registration//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 3239-3248[DOI: 10.1109/CVPR.2019.00336]
Fischler M A and Bolles R C. 1981. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6): 381-395[DOI:10.1145/358669.358692]
Frome A, Huber D, Kolluri R, Bülow T and Malik J. 2004. Recognizing objects in range data using regional point descriptors//Proceedings of the 8th European Conference on Computer Vision. Prague, Czech Republic: Springer: 224-237[DOI: 10.1007/978-3-540-24672-5_18]
Ginzburg D and Raviv D. 2021. Deep weighted consensus (DWC): dense correspondence confidence maps for 3D shape registration[EB/OL]. [2021-05-06]. https://arxiv.org/pdf/2105.02714.pdf
Gojcic Z, Zhou C F, Wegner J D and Wieser A. 2019. The perfect match: 3D point cloud matching with smoothed densities//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 5540-5549[DOI: 10.1109/CVPR.2019.00569]
Gold S, Rangarajan A, Lu C P, Pappu S and Mjolsness E. 1998. New algorithms for 2D and 3D point matching: pose estimation and correspondence. Pattern Recognition, 31(8): 1019-1031[DOI:10.1016/S0031-3203(98)80010-1]
Groβ J, Ošep A and Leibe B. 2019. AlignNet-3D: fast point cloud registration of partially observed objects//Proceedings of 2019 International Conference on 3D Vision (3DV). Quebec City, Canada: IEEE: 623-632[DOI: 10.1109/3DV.2019.00074]
Guo Y L, Wang H Y, Hu Q Y, Liu H, Liu L and Bennamoun M. 2021. Deep learning for 3D point clouds: a survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(12): 4338-4364[DOI:10.1109/TPAMI.2020.3005434]
Held D, Thrun S and Savarese S. 2016. Learning to track at 100 FPS with deep regression networks//Proceedings of the 14th European Conference on Computer Vision. Amsterdam, the Netherlands: Springer: 749-765[DOI: 10.1007/978-3-319-46448-0_45]
Horache S, Deschaud J E and Goulette F. 2021. 3D point cloud registration with multi-scale architecture and unsupervised transfer learning[EB/OL]. [2021-03-26]. https://arxiv.org/pdf/2103.14533.pdf
Huang S Y, Gojcic Z, Usvyatsov M, Wieser A and Schindler K. 2021a. PREDATOR: registration of 3D point clouds with low overlap//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE: 4265-4274[DOI: 10.1109/CVPR46437.2021.00425]
Huang X S, Mei G F and Zhang J. 2020. Feature-metric registration: a fast semi-supervised approach for robust point cloud registration without correspondences//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 11363-11371[DOI: 10.1109/CVPR42600.2020.01138]
Huang X S, Mei G F, Zhang J and Abbas R. 2021b. A comprehensive survey on point cloud registration[EB/OL]. [2021-03-03]. https://arxiv.org/pdf/2103.02690.pdf
Jang E, Gu S X and Poole B. 2017. Categorical reparameterization with Gumbel-Softmax[EB/OL]. [2021-03-03]. https://arxiv.org/pdf/1611.01144.pdf
Johnson A E and Hebert M. 1999. Using spin images for efficient object recognition in cluttered 3D scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(5): 433-449[DOI:10.1109/34.765655]
Joung S, Kim S, Kim H, Kim M, Kim I J, Cho J and Sohn K. 2020. Cylindrical convolutional networks for joint object detection and viewpoint estimation//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 14151-14160[DOI: 10.1109/CVPR42600.2020.01417]
Khoury M, Zhou Q Y and Koltun V. 2017. Learning compact geometric features//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 153-161[DOI: 10.1109/ICCV.2017.26]
Li J H, Zhang C H, Xu Z Y, Zhou H N and Zhang C. 2020. Iterative distance-aware similarity matrix convolution with mutual-supervised point elimination for efficient point cloud registration//Proceedings of the 16th European Conference on Computer Vision. Glasgow, United Kingdom: Springer: 378-394[DOI: 10.1007/978-3-030-58586-0_23]
Long X X, Cheng X J, Zhu H, Zhang P J, Liu H M, Li J, Zheng L T, Hu Q Y, Liu H, Cao X, Yang R G, Wu Y H, Zhang G F, Liu Y B, Xu K, Guo Y L and Chen B Q. 2021. Recent progress in 3D vision. Journal of Image and Graphics, 26(6): 1389-1428[DOI:10.11834/jig.210043]
Lu F, Chen G, Liu Y L, Qu Z N and Knoll A. 2020. RSKDD-Net: random sample-based keypoint detector and descriptor//Proceedings of the 34th Annual Conference on Neural Information Processing Systems (NeurIPS 2020). Curran Associates Inc.: 21297-21308
Lu W X, Wan G W, Zhou Y, Fu X Y, Yuan P F and Song S Y. 2019. DeepVCP: an end-to-end deep neural network for point cloud registration//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea(South): IEEE: 12-21[DOI: 10.1109/ICCV.2019.00010]
Pais G D, Ramalingam S, Govindu V M, Nascimento J C, Chellappa R and Miraldo P. 2020. 3DRegNet: a deep neural network for 3D point registration//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 7191-7201[DOI: 10.1109/CVPR42600.2020.00722]
Papadopoulo T and Lourakis M I A. 2000. Estimating the Jacobian of the singular value decomposition: theory and applications//Proceedings of the 6th European Conference on Computer Vision. Ireland: Springer: 554-570[DOI: 10.1007/3-540-45054-8_36]
Qi C R, Su H, Mo K and Guibas L J. 2017a. PointNet: deep learning on point sets for 3D classification and segmentation//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 77-85[DOI: 10.1109/CVPR.2017.16]
Qi C R, Yi L, Su H and Guibas L J. 2017b. PointNet++: deep hierarchical feature learning on point sets in a metric space//Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: Curran Associates Inc: 5105-5114
Rusu R B, Blodow N and Beetz M. 2009. Fast point feature histograms (FPFH) for 3D registration//Proceedings of 2009 IEEE International Conference on Robotics and Automation. Kobe, Japan: IEEE: 3212-3217[DOI: 10.1109/ROBOT.2009.5152473]
Rusu R B, Blodow N, Marton Z C and Beetz M. 2008. Aligning point cloud views using persistent feature histograms//Proceedings of 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems. Nice, France: IEEE: 3384-3391[DOI: 10.1109/IROS.2008.4650967]
Saiti E and Theoharis T. 2020. An application independent review of multimodal 3D registration methods. Computers and Graphics, 91: 153-178[DOI:10.1016/j.cag.2020.07.012]
Salti S, Tombari F and Di Stefano L. 2014. SHOT: unique signatures of histograms for surface and texture description. Computer Vision and Image Understanding, 125: 251-264[DOI:10.1016/j.cviu.2014.04.011]
Sarlin P E, DeTone D, Malisiewicz T and Rabinovich A. 2020. SuperGlue: learning feature matching with graph neural networks//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 4937-4946[DOI: 10.1109/CVPR42600.2020.00499]
Sarode V, Dhagat A, Srivatsan R A, Zevallos N, Lucey S and Choset H. 2020. MaskNet: a fully-convolutional network to estimate inlier points//Proceedings of 2020 International Conference on 3D Vision (3DV). Fukuoka, Japan: IEEE: 1029-1038[DOI: 10.1109/3DV50981.2020.00113]
Sarode V, Li X Q, Goforth H, Aoki Y, Srivatsan R A, Lucey S and Choset H. 2019. PCRNet: point cloud registration network using PointNet encoding[EB/OL]. [2021-03-03]. https://arxiv.org/pdf/1908.07906.pdf
Shotton J, Glocker B, Zach C, Izadi S, Criminisi A and Fitzgibbon A. 2013. Scene coordinate regression forests for camera relocalization in RGB-D images//Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition. Portland, USA: IEEE: 2930-2937[DOI: 10.1109/CVPR.2013.377]
Thomas H, Qi C R, Deschaud J E, Marcotegui B, Goulette F and Guibas L. 2019. KPConv: flexible and deformable convolution for point clouds//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea(South): IEEE: 6410-6419[DOI: 10.1109/ICCV.2019.00651]
Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez A N, Kaiser Ł and Polosukhin I. 2017. Attention is all you need//Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: Curran Associates Inc. : 6000-6010
Wang X L, Girshick R, Gupta A and He K M. 2018. Non-local neural networks//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 7794-7803[DOI: 10.1109/CVPR.2018.00813]
Wang Y and Solomon J M. 2019a. Deep closest point: learning representations for point cloud registration//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea(South): IEEE: 3522-3531[DOI: 10.1109/ICCV.2019.00362]
Wang Y and Solomon J M. 2019b. PRNet: self-supervised learning for partial-to-partial registration//Proceedings of the 33rd Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc. : 8814-8826
Wang Y, Sun Y B, Liu Z W, Sarma S E, Bronstein M M and Solomon J M. 2019. Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics, 38(5): #146[DOI:10.1145/3326362]
Wu Z R, Song S R, Khosla A, Yu F, Zhang L G, Tang X O and Xiao J X. 2015. 3D ShapeNets: a deep representation for volumetric shapes//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE: 1912-1920[DOI: 10.1109/CVPR.2015.7298801]
Xiao J X, Owens A and Torralba A. 2013. SUN3D: a database of big spaces reconstructed using SfM and object labels//Proceedings of 2013 IEEE International Conference on Computer Vision. Sydney, Australia: IEEE: 1625-1632[DOI: 10.1109/ICCV.2013.458]
Xu H, Liu S C, Wang G F, Liu G H and Zeng B. 2021a. OMNet: learning overlapping mask for partial-to-partial point cloud registration[EB/OL]. [2021-03-03]. https://arxiv.org/pdf/2103.00937.pdf
Xu M T, Ding R Y, Zhao H S and Qi X J. 2021b. PAConv: position adaptive convolution with dynamic kernel assembling on point clouds//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE: 3172-3181[DOI: 10.1109/CVPR46437.2021.00319]
Yang J Q, Chen J H, Huang Z Q, Quan S W, Zhang Y N and Cao Z G. 2020. 3D correspondence grouping with compatibility features[EB/OL]. [2021-03-03]. https://arxiv.org/pdf/2007.10570.pdf
Yang Y Q, Feng C, Shen Y R and Tian D. 2018. FoldingNet: point cloud auto-encoder via deep grid deformation//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 206-215[DOI: 10.1109/CVPR.2018.00029]
Yew Z J and Lee G H. 2018. 3DFeat-Net: weakly supervised local 3D features for point cloud registration//Proceedings of the 15th European Conference on Computer Vision (ECCV). Munich, Germany: Springer: 630-646[DOI: 10.1007/978-3-030-01267-0_37]
Yew Z J and Lee G H. 2020. RPM-Net: robust point matching using learned features//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 11824-11833[DOI: 10.1109/CVPR42600.2020.01184]
Yuan W T, Eckart B, Kim K, Jampani V, Fox D and Kautz J. 2020. DeepGMR: learning latent Gaussian mixture models for registration//Proceedings of the 16th European Conference on Computer Vision. Glasgow, United Kingdom: Springer: 733-750[DOI: 10.1007/978-3-030-58558-7_43]
Zeng A, Song S R, Nieβner M, Fisher M, Xiao J X and Funkhouser T. 2017. 3DMatch: learning local geometric descriptors from RGB-D reconstructions//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 199-208[DOI: 10.1109/CVPR.2017.29]
Zhang Z Y, Dai Y C and Sun J D. 2020. Deep learning based point cloud registration: an overview. Virtual Reality and Intelligent Hardware, 2(3): 222-246[DOI:10.1016/j.vrih.2020.05.002]