Hu Shiyu1, Zhao Xin2, Huang Kaiqi2 (1. School of Artificial Intelligence, University of Chinese Academy of Sciences; 2. Center for Research on Intelligent System and Engineering, Institute of Automation, Chinese Academy of Sciences)
Visual intelligence evaluation techniques for single object tracking: a survey
Hu Shiyu, Zhao Xin1,2, Huang Kaiqi1,2 (1. Center for Research on Intelligent System and Engineering, Institute of Automation, Chinese Academy of Sciences; 2. School of Artificial Intelligence, University of Chinese Academy of Sciences, China)
The single object tracking (SOT) task, which aims to model the human dynamic vision system and achieve human-like object tracking ability in complex environments, has been widely applied in real-world scenarios such as autonomous driving, video surveillance, and robot vision. Over the past decade, thanks to the boom in deep learning, many research groups have designed tracking frameworks such as correlation filters (CF) and Siamese neural networks (SNN), which have greatly advanced SOT research. However, many factors in natural application scenes (e.g., target deformation, fast motion, and illumination changes) still challenge SOT trackers, and algorithms with novel architectures have therefore been proposed to achieve robust tracking and better performance in representative experimental environments. Nevertheless, failure cases in natural application environments reveal that a large gap remains between the performance of state-of-the-art trackers and human expectations, which motivates us to pay more attention to the evaluation aspects. Therefore, instead of the traditional reviews that concentrate mainly on algorithm design, this paper systematically reviews visual intelligence evaluation techniques for SOT from four key aspects: (1) task definition, (2) experimental environments, (3) task executors, and (4) evaluation mechanisms. First, we present the development of the task definition, from the original short-term tracking through long-term tracking to the recently proposed global instance tracking. With the evolution of the SOT definition, research has progressed from perceptual toward cognitive intelligence. We also summarize the challenging factors in the SOT task, hoping to help readers understand the research bottlenecks in actual applications. Second, we compare representative experimental environments in SOT evaluation.
Unlike existing reviews, which mainly introduce datasets in chronological order, this paper divides the environments into three categories (i.e., general datasets, dedicated datasets, and competition datasets) and introduces them separately. Third, we introduce the executors of the SOT task, which include not only tracking algorithms (traditional trackers, CF-based trackers, SNN-based trackers, and transformer-based trackers) but also human visual tracking experiments conducted in interdisciplinary fields. To our knowledge, no existing SOT review has covered related work on human dynamic visual ability. Introducing these interdisciplinary works thus supports visual intelligence evaluation by comparing machines with humans and better reveals the degree of intelligence achieved by existing algorithmic modeling methods. Fourth, we review evaluation mechanisms and metrics, covering both the traditional machine-machine comparison and the novel human-machine comparison, and analyze the target tracking capability of various task executors. We also provide an overview of the human-machine comparison known as the visual Turing test, including its application in many vision tasks (e.g., image comprehension, game navigation, image classification, and image recognition). In particular, we hope this paper helps researchers focus on this novel evaluation technique, better understand the capability bottlenecks, further explore the gaps between existing methods and humans, and finally achieve the goal of algorithmic intelligence. Finally, we indicate the evolution trends of visual intelligence evaluation techniques: (1) designing more human-like task definitions, (2) constructing more comprehensive and realistic experimental environments, (3) including human subjects as task executors, and (4) using human abilities as a baseline to evaluate machine intelligence.
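To make the machine-machine comparison concrete, the sketch below implements one widely used SOT evaluation metric, the OTB-style success measure: the fraction of frames whose predicted box overlaps the ground truth above an IoU threshold, summarized by the area under the success curve (AUC). This is an illustrative example of a standard metric in the field, not code taken from any of the surveyed works.

```python
import numpy as np

def iou(a, b):
    """Intersection over union between two [x, y, w, h] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_auc(pred, gt, thresholds=np.linspace(0, 1, 21)):
    """OTB-style success score: for each IoU threshold, compute the
    fraction of frames whose overlap exceeds it, then average over
    thresholds (area under the success curve)."""
    ious = np.array([iou(p, g) for p, g in zip(pred, gt)])
    success = np.array([(ious > t).mean() for t in thresholds])
    return float(success.mean())
```

A per-frame precision metric based on center location error is computed analogously, thresholding the distance between box centers instead of the overlap.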
In conclusion, this paper summarizes the evolution trends of visual intelligence evaluation techniques for the SOT task, further analyzes the existing challenging factors, and discusses possible future research directions.