Abstract
Comprehensive survey on 3D visual-language understanding techniques

Lei Yinjie1, Xu Kai2, Guo Yulan3, Yang Xin4, Wu Yuwei5, Hu Wei6, Yang Jiaqi7, Wang Hanyun8 (1. College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China; 2. School of Computer Science, National University of Defense Technology, Changsha 410073, China; 3. College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China; 4. School of Computer Science and Technology, Dalian University of Technology, Dalian 116081, China; 5. School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China; 6. Wangxuan Institute of Computer Technology, Peking University, Beijing 100091, China; 7. School of Computer Science, Northwestern Polytechnical University, Xi'an 710072, China; 8. College of Computer and Data Science/College of Software, Information Engineering University, Zhengzhou 450001, China)

The core of 3D visual reasoning is to understand the relationships among different visual entities in point cloud scenes. Traditional 3D visual reasoning typically requires users to possess professional expertise. Nonprofessional users find it difficult to convey their intentions to computers, which hinders the popularization and advancement of this technology. Users now expect a more convenient way to convey their intentions to the computer, exchange information, and obtain personalized results. To address this issue, researchers use natural language as semantic context or as query criteria that reflect user intentions, and accomplish various tasks by interacting such natural language with 3D point clouds. Through multimodal interaction, often built on Transformer or graph neural network architectures, current approaches can not only locate the entities mentioned by users (e.g., visual grounding and open-vocabulary recognition) but also generate user-required content (e.g., dense captioning, visual question answering, and scene generation). Specifically, 3D visual grounding aims to locate the desired objects or regions in a 3D point cloud scene based on an object-related linguistic query. Open-vocabulary 3D recognition aims to identify and localize 3D objects of novel classes defined by an unbounded (open) vocabulary at inference time, generalizing beyond the limited set of base classes labeled during training. 3D dense captioning aims to detect all possible instances within a 3D point cloud scene and generate a corresponding natural language description for each instance. The goal of 3D visual question answering is to comprehend an entire 3D scene and provide an appropriate answer. Text-guided scene generation synthesizes a realistic 3D scene, composed of a complex background and multiple objects, from natural language descriptions.
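The cross-modal interaction at the heart of these tasks can be illustrated with a minimal sketch: a text-derived query vector attends over per-point features via scaled dot-product attention, producing a language-conditioned scene feature. This toy example is written in plain Python with hypothetical, hand-picked feature vectors; it is not taken from any surveyed method.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(text_query, point_feats):
    """One text token (query) attends over per-point features (keys = values).

    Returns the attention-weighted fused feature and the attention weights.
    """
    d = len(text_query)
    # Scaled dot-product similarity between the query and each point feature.
    scores = [sum(q * k for q, k in zip(text_query, p)) / math.sqrt(d)
              for p in point_feats]
    weights = softmax(scores)
    # Weighted sum of point features, conditioned on the language query.
    fused = [sum(w * p[i] for w, p in zip(weights, point_feats))
             for i in range(d)]
    return fused, weights

# Toy example: the query is most similar to the second point's feature,
# so attention concentrates there.
query = [1.0, 0.0]
points = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]]
fused, weights = cross_attention(query, points)
```

In real grounding pipelines the query comes from a language encoder and the point features from a 3D backbone, and multiple attention heads and layers are stacked, but the fusion principle is the same.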
The aforementioned paradigm, known as 3D visual-language understanding, has gained significant traction in recent years across fields such as autonomous driving, robot navigation, and human-computer interaction. Consequently, it has become a highly anticipated research direction within the computer vision domain. Over the past three years, 3D visual-language understanding technology has developed rapidly and shown a blossoming trend. However, comprehensive summaries of the latest research progress remain lacking. It is therefore necessary to systematically summarize recent studies, comprehensively evaluate the performance of different approaches, and prospectively point out future research directions. This survey aims to fill that gap. To this end, the study focuses on the two most representative categories of 3D visual-language understanding techniques and systematically summarizes their latest research advances: bounding box prediction and content generation. First, the study provides an overview of the problem definition and existing challenges in 3D visual-language understanding, and outlines common backbones used in this area. The challenges include 3D-language alignment and complex scene understanding, while the common backbones involve a priori rules, multilayer perceptrons, graph neural networks, and Transformer architectures. Subsequently, the study delves into downstream scenarios, emphasizing the two types of 3D visual-language understanding techniques: bounding box prediction and content generation. The study thoroughly explores the advantages and disadvantages of each method. Furthermore, it compares and analyzes the performance of various methods on different benchmark datasets.
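Benchmark comparisons for bounding box prediction tasks such as 3D visual grounding commonly report accuracy at an intersection-over-union (IoU) threshold, e.g. Acc@0.25 or Acc@0.5. The sketch below computes IoU for axis-aligned 3D boxes; real benchmarks often use oriented boxes, so treat this as an illustrative simplification with hypothetical box values.

```python
def iou_3d(a, b):
    """IoU of two axis-aligned 3D boxes given as
    (xmin, ymin, zmin, xmax, ymax, zmax)."""
    inter = 1.0
    for i in range(3):
        lo = max(a[i], b[i])       # overlap start on this axis
        hi = min(a[i + 3], b[i + 3])  # overlap end on this axis
        if hi <= lo:
            return 0.0             # no overlap on some axis
        inter *= hi - lo

    def vol(box):
        return ((box[3] - box[0]) * (box[4] - box[1]) * (box[5] - box[2]))

    return inter / (vol(a) + vol(b) - inter)

# Toy example: two 2x2x2 boxes overlapping in a 1x1x1 corner.
pred = (0.0, 0.0, 0.0, 2.0, 2.0, 2.0)
gt = (1.0, 1.0, 1.0, 3.0, 3.0, 3.0)
# intersection = 1, union = 8 + 8 - 1 = 15, so IoU = 1/15
```

A predicted box would then count as correct under Acc@0.25 if `iou_3d(pred, gt) >= 0.25`.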
Finally, the study concludes by looking ahead to the future prospects of 3D visual-language understanding technology, which can promote deeper research and wider application in this field. The major contributions of this study can be summarized as follows: 1) Systematic survey of 3D visual-language understanding. To the best of our knowledge, this survey is the first to thoroughly discuss recent advances in 3D visual-language understanding. We categorize algorithms into different taxonomies from the perspective of downstream scenarios to give readers a clear overview. 2) Comprehensive performance evaluation and analysis. We compare existing 3D visual-language understanding approaches on several publicly available datasets. Our in-depth analysis can help researchers select a baseline suitable for their specific applications, while also offering valuable insights for improving existing methods. 3) Insightful discussion of future prospects. Based on the systematic survey and comprehensive performance comparison, promising future research directions are discussed, including large-scale 3D foundation models, computational efficiency of 3D modeling, and incorporation of additional modalities.