视觉传感机理与数据处理进展
Progress in mechanism and data processing of visual sensing
2020, Vol. 25, No. 1, pp. 19-30
Received: 2019-08-01; Revised: 2019-09-16; Accepted: 2019-09-23; Published in print: 2020-01-16
DOI: 10.11834/jig.190404

Traditional visual perception relies mainly on RGB optical images and video as its data sources and, aided by the development of computer vision, has achieved great success. However, traditional RGB optical imaging is limited in spectrum, sampling speed, measurement accuracy, and working conditions. In recent years, new visual sensing mechanisms and new data processing technologies have developed rapidly, bringing major opportunities for improving perception and cognition; they also carry important theoretical value and respond to major application needs. Focusing on laser scanning, underwater acoustic (sonar) imaging, new-system dynamic imaging, computational imaging, and pose sensing, this paper reviews the state of development, research frontiers, hot topics, and trends. In the field of visual sensing, domestic research institutions and teams have made remarkable progress in data processing and applications, but overall China still lags behind advanced countries in Europe, the United States, and Japan, especially in the development of the related hardware. Finally, development trends and prospects are given to provide a reference for researchers in related fields.
Traditional visual sensing is based on RGB optical images and video and has achieved great success with the development of computer vision. However, traditional RGB optical imaging has limitations in spectral characterization, sampling speed, measurement accuracy, and operating conditions. New visual sensing mechanisms and new data processing technologies have developed rapidly in recent years, bringing considerable opportunities for improving sensing and cognitive capability; these developments also carry important theoretical merit and respond to major application requirements. This report describes the development status and trends of visual sensing, including laser scanning, sonar, new dynamic imaging systems, computational imaging, pose sensing, and other related fields.

Research on laser scanning is increasingly active. In algorithm development for point cloud data processing, many domestic organizations and teams have reached an internationally synchronized or leading level, and the application of point cloud data is demonstrated particularly extensively by Chinese teams. However, several foreign countries still hold considerable advantages in hardware equipment, data acquisition, and pre-processing.

In event-based imaging (i.e., the dynamic vision sensor, DVS), domestic teams have focused on target classification, target recognition and tracking, stereo matching, and super-resolution, achieving progress and breakthroughs. Hardware design and production technology for DVS remain concentrated in foreign research institutes, almost all of which have a research history of about 10 years, and few domestic institutions can independently produce DVS devices. Overall, although domestic DVS research started relatively late, it has developed very rapidly in recent years.

Moving-target detection and underwater acoustic imaging of small static targets have long been focal points of underwater information technology. Underwater acoustic imaging has both military and civil applications. Domestically, high-tech research in this area is mainly supported by civil sectors; for example, synthetic aperture sonar was developed under sustained national support. Substantial breakthroughs, such as in common mechanisms, key technologies, and demonstration applications, are difficult to achieve in a short time; therefore, sustained and stable support is what guarantees technological breakthroughs and industrialization.

Learning-based visual positioning and 3D information processing have made remarkable progress, but many problems remain. In pose sensing of non-cooperative targets, many countries and organizations with advanced space technology have carried out numerous investigations, and some of the results have been successfully applied to space operations in practice. By contrast, visual measurement of non-cooperative targets started late in China. Related programs, such as rendezvous and docking with space non-cooperative targets and on-orbit servicing by space robots, are under way, but most of the related investigations remain at the stage of theoretical research and ground experiment verification, and no mature engineering application is yet available.

According to the literature survey, domestic institutions and teams have made substantial progress in data processing and applications in the field of visual sensing, but lags are still observable, especially in the development of the related hardware.

Laser scanning provides a large amount of data and abundant geometric information but lacks semantic information. Research frontiers have emerged in unmanned driving, virtual reality, and augmented reality, and wide applications are expected in the future, such as the minimal description of massive 3D point cloud data and cross-dimensional structure description.
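As a simple illustration of compressing a massive point cloud into a more compact description, the sketch below performs voxel-grid downsampling in plain NumPy; the voxel size and the centroid-averaging rule are illustrative assumptions rather than the method of any specific work surveyed here.

```python
# Minimal voxel-grid downsampling sketch: points falling into the same voxel
# are replaced by their centroid, giving a compact description of a large cloud.
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """points: (N, 3) x, y, z coordinates; voxel_size: cubic voxel edge length."""
    # Integer voxel index of every point.
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel; 'inverse' maps each point to its voxel group.
    _, inverse, counts = np.unique(voxel_idx, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.ravel()
    # Sum the coordinates of all points in each voxel, then divide by count.
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

if __name__ == "__main__":
    cloud = np.random.rand(100000, 3) * 50.0       # synthetic 50 m x 50 m x 50 m scene
    sparse = voxel_downsample(cloud, voxel_size=0.5)
    print(cloud.shape, "->", sparse.shape)
```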
DVS research has a history of over 10 years and has progressed in SLAM, tracking, reconstruction, and other fields. The most evident advantages of DVS lie in capturing high-speed moving objects and in efficient, low-cost processing. Moreover, the real-time background filtering capability of DVS has great prospects in unmanned driving and trajectory analysis and is expected to attract wide attention.
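To make the event-based processing referred to above concrete, the following sketch shows two elementary DVS operations: suppressing isolated background-activity events and accumulating an asynchronous event stream of (t, x, y, polarity) tuples into a frame. The sensor resolution, the 5 ms support window, and the event format are assumptions for illustration, not the interface of any particular DVS device.

```python
# Minimal DVS sketch: a background-activity filter and event-to-frame accumulation.
import numpy as np

WIDTH, HEIGHT = 346, 260          # assumed sensor resolution
WINDOW_US = 5000                  # an event is "supported" if a neighbour fired within 5 ms

def filter_background_activity(events):
    """Keep events that have at least one spatial neighbour within WINDOW_US."""
    last_time = np.full((HEIGHT, WIDTH), -np.inf)   # latest timestamp seen near each pixel
    kept = []
    for t, x, y, p in events:                       # events sorted by timestamp t (microseconds)
        y0, y1 = max(0, y - 1), min(HEIGHT, y + 2)
        x0, x1 = max(0, x - 1), min(WIDTH, x + 2)
        if t - last_time[y0:y1, x0:x1].max() <= WINDOW_US:
            kept.append((t, x, y, p))
        last_time[y0:y1, x0:x1] = t                 # refresh the neighbourhood timestamp
    return kept

def accumulate_frame(events):
    """Sum signed polarities per pixel to form a conventional image-like frame."""
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.int32)
    for _, x, y, p in events:
        frame[y, x] += 1 if p > 0 else -1
    return frame
```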
The development of small-target detection technology for the deep sea can serve deep-sea resource development, the protection of marine rights, search and rescue, and military applications. However, inadequate sonar equipment for deep-sea small-target detection seriously restricts these applications. Two new-system imaging sonars, namely, high-speed imaging sonar based on frequency-division multiple-input multiple-output (FD-MIMO) and multistatic imaging sonar, are expected to improve the detection and recognition rates for underwater small targets.
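For readers unfamiliar with acoustic image formation, the sketch below shows conventional delay-and-sum beamforming on a uniform linear hydrophone array, the basic operation on top of which more elaborate systems such as the FD-MIMO and multistatic sonars mentioned above are built; the array geometry, sound speed, and sampling rate are illustrative assumptions.

```python
# Minimal delay-and-sum beamforming sketch for a uniform linear receive array.
import numpy as np

C = 1500.0            # sound speed in water, m/s
FS = 100e3            # sampling rate, Hz
N_ELEM = 32           # number of hydrophones
SPACING = 0.0075      # element spacing, m (half a wavelength at 100 kHz)

def delay_and_sum(signals: np.ndarray, angles_deg: np.ndarray) -> np.ndarray:
    """Beamform time-domain signals (N_ELEM, N_SAMP) and return beam power per angle."""
    n_samp = signals.shape[1]
    elem_pos = (np.arange(N_ELEM) - (N_ELEM - 1) / 2) * SPACING
    powers = np.zeros(angles_deg.size)
    for i, ang in enumerate(np.deg2rad(angles_deg)):
        # Plane-wave time delay of each element for steering direction 'ang'.
        delays = elem_pos * np.sin(ang) / C
        shifts = np.round(delays * FS).astype(int)
        beam = np.zeros(n_samp)
        for e in range(N_ELEM):
            beam += np.roll(signals[e], -shifts[e])   # align, then sum
        powers[i] = np.mean(beam ** 2)
    return powers
```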
Robustness is critical for visual positioning and 3D information processing, and intelligent methods offer a promising route to these problems. At present, pose perception algorithms are still inefficient and imperfect and require further investigation. Space operations have prerequisites, including estimating the relative pose of a space non-cooperative target, reconstructing its 3D structure, and recognizing its characteristic parts. Because the model information of the target itself is often totally or partly known, making full use of this prior model information can greatly help solve for the target pose. Pose tracking based on a 3D model to obtain the initial pose of a target is expected to become a future research hotspot. In addition, in the tide of artificial intelligence, how to combine it with pose perception is worth exploring.
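As an illustration of exploiting a known target model, the sketch below casts pose estimation as a Perspective-n-Point (PnP) problem and solves it with OpenCV's solvePnP using the EPnP solver; the model points, detected image points, and camera intrinsics are placeholder values, not data from any system discussed here.

```python
# Minimal model-based pose estimation sketch: known 3D model points plus their
# 2D detections in an image reduce pose estimation to a PnP problem.
import numpy as np
import cv2

# Known 3D feature points of the target, in the target body frame (metres).
model_points = np.array([
    [ 0.5,  0.5, 0.0],
    [-0.5,  0.5, 0.0],
    [-0.5, -0.5, 0.0],
    [ 0.5, -0.5, 0.0],
    [ 0.0,  0.0, 0.3],
], dtype=np.float64)

# Their detected projections in the image (pixels), e.g. from feature matching.
image_points = np.array([
    [420.0, 310.0],
    [230.0, 305.0],
    [235.0, 120.0],
    [425.0, 125.0],
    [330.0, 215.0],
], dtype=np.float64)

# Pinhole camera intrinsics, assumed calibrated beforehand; distortion ignored.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_EPNP)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # rotation of the target frame w.r.t. the camera
    print("R =\n", R)
    print("t =", tvec.ravel())
```

In a tracking setting, the pose recovered for one frame would typically initialize an iterative refinement in the next frame rather than being re-estimated from scratch.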
Object position and attitude perception based on vision systems is crucial for promoting the development of future space operations, including close-range operation scenarios (e.g., target perception, docking, and capture), as well as for small autonomous aircraft, ground intelligent vehicles, and mobile robots. The prospects given in this paper may provide a reference for researchers in related fields.