A single object tracking network with spatio-temporal feature encoding

Wang Mengmeng, Yang Xiaoqian, Liu Yong (College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, China)

Abstract
Objective With the advent of deep neural networks, visual tracking has developed rapidly, yet the spatio-temporal properties of video in the tracking task, especially temporal appearance consistency, still leave large room for exploration. This paper proposes a novel, simple, and practical tracking algorithm, the temporal-aware network (TAN), which approaches tracking from a video perspective and encodes the temporal and spatial features of a sequence simultaneously. Method TAN embeds a new temporal aggregation module (TAM) to exchange and fuse information from multiple historical frames, so the tracker can adapt to appearance changes of the target, such as deformation and rotation, without any model update strategy. To build a simple and practical tracking framework, a target estimation strategy is designed: the four corners of the target are detected, the two diagonal corner pairs form two candidate boxes, and a box selection strategy determines the final target location, which effectively handles difficulties such as occlusion. Trained offline and without any model update, the proposed tracker TAN performs tracking in a fully feed-forward manner. Result On the public datasets OTB50 (online object tracking: a benchmark), OTB100, TrackingNet, LaSOT (a high-quality benchmark for large-scale single object tracking), and UAV123 (a benchmark and simulator for UAV tracking), the tracker achieves leading performance among small network models while maintaining a high processing speed of 70 frame/s. Compared with several state-of-the-art trackers, TAN achieves a good balance between performance and speed, and it remains superior even against trackers that use complex template update strategies or online update mechanisms. Ablation studies further verify the effectiveness of each proposed module. Conclusion The proposed tracker is trained fully offline, requires no online model update strategy during feed-forward inference, adapts to appearance changes of the target, and outperforms other lightweight trackers.
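To make the corner-based target estimation strategy above concrete, the following minimal sketch (not the authors' implementation) assumes the network outputs four corner heatmaps and a center score map; it decodes the two diagonal candidate boxes and keeps the one whose center scores higher, which is what lets the method avoid complicated corner-grouping constraints. All function and variable names here are illustrative.

```python
import numpy as np

def estimate_box(tl, tr, bl, br, center_score):
    """Illustrative sketch of corner-based target estimation.

    tl, tr, bl, br : HxW heatmaps for the top-left, top-right,
                     bottom-left, and bottom-right corners
    center_score   : HxW map scoring how likely each location is the center
    Returns a box (x1, y1, x2, y2) in heatmap coordinates.
    """
    def argmax2d(m):
        y, x = np.unravel_index(np.argmax(m), m.shape)
        return float(x), float(y)

    x_tl, y_tl = argmax2d(tl)
    x_tr, y_tr = argmax2d(tr)
    x_bl, y_bl = argmax2d(bl)
    x_br, y_br = argmax2d(br)

    # Two candidate boxes from the two diagonal corner pairs.
    box_a = (x_tl, y_tl, x_br, y_br)   # top-left + bottom-right
    box_b = (x_bl, y_tr, x_tr, y_bl)   # bottom-left + top-right

    def center_conf(box):
        cx = int(np.clip(round((box[0] + box[2]) / 2), 0, center_score.shape[1] - 1))
        cy = int(np.clip(round((box[1] + box[3]) / 2), 0, center_score.shape[0] - 1))
        return center_score[cy, cx]

    # Keep the candidate whose center is more confident on the center map.
    return box_a if center_conf(box_a) >= center_conf(box_b) else box_b
```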
Keywords
A spatio-temporal encoded network for single object tracking

Wang Mengmeng, Yang Xiaoqian, Liu Yong(College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, China)

Abstract
Objective Visual tracking has developed dramatically with deep neural networks. Single object visual tracking aims at tracking an arbitrary object in a video stream given only its bounding box in the initial frame. It is an essential task for many computer vision applications such as surveillance systems, robotics, and human-computer interaction. Simple, small-scale, easy-to-use, and purely feed-forward trackers are preferred in resource-constrained application scenarios, whereas most existing methods pursue top performance at the cost of complexity. We approach this trade-off from another perspective: we model the key temporal cues inside the network itself and dispense with the model update process and large-scale models. The intrinsic nature of the input, a streaming video, deserves more attention in the research community, and viewing tracking as a video analysis task helps formulate the basis of the tracking problem. First, the object obeys a spatial displacement constraint: its locations in adjacent frames do not change drastically unless dramatic camera motion occurs. Almost all visual trackers follow this assumption and search for the object in a new frame within a region centered at its location in the last frame. Second, there is temporal appearance consistency: the target's appearance changes smoothly across preceding frames. This can be regarded as temporal context, which provides clear cues for subsequent predictions. However, the second property has not been fully explored in the literature. Existing methods leverage temporal appearance consistency in two ways. 1) Use the target information only from the first frame by modeling visual tracking as a matching problem between the given initial patch and the follow-up frames. Siamese-network-based methods are the most popular and effective methods in this category. They apply a one-shot learning scheme for visual tracking, where the object patch in the first frame is treated as an exemplar and the patches in the search regions of consecutive frames are regarded as candidate instances; the task then becomes finding the most similar instance in each frame. This paradigm ignores the other historical frames completely, deals with each frame independently, and causes tremendous information loss. 2) Use both the given initial patch and the historical target patches, from every frame or from selected frames, to predict the object location in a new frame; this category includes both traditional and deep-neural-network-based trackers. Traditional methods such as correlation filter (CF) based trackers learn their models or classifiers from the first frame and update them in subsequent frames with a small learning rate. Diverse deep-neural-network-based methods first learn their models offline with vast training data and then fine-tune them online on the initial frame and subsequent frames. However, balancing accuracy and latency remains an open problem, especially for deep-neural-network-based methods. Moreover, network fine-tuning is forbidden in some practical applications where models are deployed on inference chips, which hinders the wide deployment of these methods. Method We propose a novel and straightforward tracker that re-formulates the visual tracking problem from the perspective of video analysis. A new temporal-aware network (TAN) is designed to encode target information from multiple frames, aiming at exploiting both the temporal appearance consistency and the spatial displacement constraint in the forward path without any online model update. To exchange and fuse information from the historical frame inputs, we introduce temporal aggregation modules (TAMs) into TAN and thereby empower the tracker to learn spatio-temporal features. To balance the computational burden caused by multi-frame inputs against tracking accuracy, we employ the shallow ResNet-18 as our feature extraction backbone and achieve a high speed of over 70 frame/s. Our tracker runs completely feed-forward and adapts to the target's appearance changes with the offline-trained, temporally encoded TAN, whereas previous methods maintain temporal appearance consistency through the first frame or historical frames and require expensive online fine-tuning to stay adaptive. To further build a complete yet simple tracking pipeline, we design a novel anchor-free and proposal-free target estimation method, in which a corner detection head in TAN detects the four corners of the target: top-left, top-right, bottom-left, and bottom-right. Since the target location can be determined either by the top-left and bottom-right pair or by the top-right and bottom-left pair, we use a center score map to indicate the confidence of these two candidate bounding boxes instead of complicated embedding constraints, which makes it easy to locate the target. Thanks to this corner-based target estimation mechanism, our tracker can handle challenging scenarios involving occlusion and significant appearance changes. Result Without bells and whistles, our method achieves leading performance among lightweight models on several public datasets, including OTB50 (online object tracking: a benchmark 50), OTB100, TrackingNet, LaSOT (a high-quality benchmark for large-scale single object tracking), and UAV123 (a benchmark and simulator for UAV tracking). Its real-time speed and simplified pipeline make TAN suitable for real applications, especially resource-limited platforms where large models and online model updates are not supported. Conclusion The proposed tracker provides a new perspective for single object tracking by mining the video nature of the task, especially its temporal appearance consistency.
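As an illustration of how a temporal aggregation module might exchange and fuse backbone features from several historical frames, here is a hypothetical PyTorch sketch. The attention-based per-frame weighting, layer sizes, and module name are assumptions made for this sketch, not the design specified by the paper.

```python
import torch
import torch.nn as nn

class TemporalAggregation(nn.Module):
    """Hypothetical sketch of a temporal aggregation module: it only
    illustrates fusing features from T historical frames into one map."""

    def __init__(self, channels, num_frames):
        super().__init__()
        # Predict one spatial attention weight per frame from the stacked
        # features (an assumed design choice for this sketch).
        self.weight = nn.Conv2d(channels * num_frames, num_frames, kernel_size=1)
        self.fuse = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feats):
        # feats: list of T tensors, each (B, C, H, W), from T historical frames
        stacked = torch.cat(feats, dim=1)                   # (B, T*C, H, W)
        attn = torch.softmax(self.weight(stacked), dim=1)   # (B, T, H, W)
        fused = sum(attn[:, t:t + 1] * feats[t] for t in range(len(feats)))
        return self.fuse(fused)                             # (B, C, H, W)


# Usage: fuse ResNet-18-like features from, e.g., three historical frames.
tam = TemporalAggregation(channels=256, num_frames=3)
frames = [torch.randn(1, 256, 16, 16) for _ in range(3)]
out = tam(frames)  # (1, 256, 16, 16) spatio-temporally aggregated feature
```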
Keywords
