Background and direction-aware correlation filter tracking
2021, Vol. 26, No. 3, Pages: 527-541
Received: 2020-05-12; Revised: 2020-07-13; Accepted: 2020-07-20; Published in print: 2021-03-16
DOI: 10.11834/jig.200139
Objective
In correlation filter tracking, training the filter with equal weights on the target and the surrounding background makes the target prone to drift when the target and the background are similar. To address this problem, this paper proposes a correlation filter tracking algorithm based on background and direction awareness.
Method
The background information around the target is learned into the filter. A Kalman filter is used to predict the motion state and motion direction of the target, the background information along the motion direction is extracted, and the background samples along the motion direction and the non-motion directions are both used in filter training, with the training weight assigned to the background along the motion direction kept higher than that of the non-motion directions, which improves the filter's ability to distinguish the target from the background; the maximum response value, obtained by linear interpolation, is used to determine the target position. An auxiliary factor g is constructed, the augmented Lagrange method (ALM) is used to move the constraint into the optimization function, and the alternating direction method of multipliers (ADMM) is used to convert the original problem into solving for the optimal filter and auxiliary factor, which reduces computational complexity. A multi-resolution search is used to estimate the scale change of the target.
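The paper gives no reference code; purely as an illustration of the direction-weighting idea described above, the sketch below uses a constant-velocity Kalman prediction to obtain a motion direction and then assigns a larger training weight to background patches whose offset from the target centre aligns with that direction. All names, matrices, and weight values (kalman_predict, background_weights, w_motion, w_other) are assumptions made for the sketch, not the authors' implementation.

import numpy as np

# State [x, y, vx, vy] with a constant-velocity model; F and Q are illustrative.
F = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition matrix
Q = np.eye(4) * 1e-2                         # process noise covariance (assumed)

def kalman_predict(x, P):
    # Standard Kalman prediction step; the predicted velocity (x_pred[2:]) gives
    # the motion direction used to select and weight background samples.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def background_weights(offsets, velocity, w_motion=1.0, w_other=0.5):
    # Hypothetical weighting: background patches whose offset from the target
    # centre points along the predicted motion direction get the larger weight.
    speed = np.linalg.norm(velocity)
    if speed == 0.0:                         # zero velocity: fall back to equal weights
        return np.full(len(offsets), w_other)
    v = velocity / speed
    weights = []
    for off in offsets:
        n = np.linalg.norm(off)
        cos_sim = float(off @ v) / n if n > 0 else 0.0
        weights.append(w_motion if cos_sim > 0.5 else w_other)
    return np.array(weights)

# Example: four background patches around the target, motion predicted along +x.
x, P = np.array([100.0, 80.0, 3.0, 0.0]), np.eye(4)
x_pred, _ = kalman_predict(x, P)
offsets = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
print(background_weights(offsets, x_pred[2:]))  # the +x patch receives the higher weight

In a tracker these weights would multiply the per-patch terms of the filter training loss; the exact weighting used in the paper is not specified in the abstract.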
Result
On the OTB50 (object tracking benchmark) and OTB100 datasets, the average precision and average success rate of the proposed algorithm are 0.804 and 0.748, respectively, 7% and 16% higher than those of the BACF (background-aware correlation filters) algorithm. On the LaSOT dataset, the precision of the proposed algorithm is 0.329, compared with 0.239 for BACF, which further demonstrates the robustness of the proposed algorithm.
Conclusion
Compared with other mainstream algorithms, the proposed algorithm tracks more robustly under complex conditions such as motion blur, background clutter, and deformation.
Objective
Although the background-aware correlation filters (BACF) algorithm increases the number of samples and guarantees sample quality, it trains on the background information with equal weights, which leads to target drift when the target is similar to the background in complex scenes. Equal-weight training ignores the priority of sample collection along the target's motion direction and the importance of weight distribution. If the sampling scheme is designed around the target's motion direction and the sample weights are allocated reasonably, the tracking performance will improve and target drift will be effectively suppressed. Therefore, this paper adds Kalman filtering to the BACF framework.
Method
For the single-target tracking problem, the algorithm in this paper takes only the motion vector from the Kalman prediction and does not locate the target by assuming constant velocity or acceleration; the target position is still determined by the response peak. The maximum response value is obtained by linear interpolation, and the target location is determined from it. When the predicted speed is zero, the response peak used to position the target in the previous frame is still used to determine the target position in the current frame. Kalman filtering predicts the target's motion state and direction, and the background information along the target's motion direction and the non-motion directions is used for filter training, ensuring that the training weight assigned to the background information along the motion direction is higher than the weights of the non-motion directions. To optimize and solve the objective function, an auxiliary factor g is constructed, the augmented Lagrangian multiplier method is used to place the constraints in the optimization function, and the alternating direction method of multipliers (ADMM) is used to solve alternately for the filter and the auxiliary factor, reducing computational complexity.
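As a rough, simplified illustration of the ADMM alternation described above (not the paper's actual objective, which includes the BACF cropping operator and direction-weighted background terms), the sketch below splits a ridge-regression correlation filter objective with an auxiliary variable g and alternates a closed-form frequency-domain update of the filter, an update of g, and a dual ascent step. The penalty parameter mu and the form of the g-subproblem are assumptions made for the sketch.

import numpy as np

def admm_filter(x, y, lam=0.01, mu=1.0, iters=10):
    # Toy ADMM: minimise |yhat - conj(xhat) * hhat|^2 + lam * |g|^2 subject to h = g,
    # where conj(xhat) * hhat is the spectrum of the correlation response.  Only the
    # alternation structure (h-step, g-step, dual update) mirrors the description above.
    xhat, yhat = np.fft.fft(x), np.fft.fft(y)
    ghat = np.zeros_like(xhat)               # auxiliary variable (frequency domain)
    lhat = np.zeros_like(xhat)               # scaled Lagrange multiplier
    for _ in range(iters):
        # h-step: per-frequency ridge solution including the augmented penalty
        hhat = (xhat * yhat + 0.5 * mu * (ghat - lhat)) / (np.abs(xhat) ** 2 + 0.5 * mu)
        # g-step: closed-form minimiser of lam*|g|^2 + (mu/2)*|h - g + l|^2
        ghat = 0.5 * mu * (hhat + lhat) / (lam + 0.5 * mu)
        # dual ascent on the constraint h = g
        lhat = lhat + hhat - ghat
    return np.real(np.fft.ifft(hhat))

# Example: learn a filter whose correlation response to x approximates a Gaussian label y.
n = 64
rng = np.random.default_rng(0)
x = rng.standard_normal(n)
y = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)
h = admm_filter(x, y)

The practical point of the splitting in the real algorithm is that each subproblem has a cheap, element-wise solution in the frequency domain, which keeps the per-frame cost low.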
Result
This paper selects the standard datasets OTB50 (object tracking benchmark) and OTB100 to facilitate experimental comparison with current mainstream algorithms. OTB50 is a commonly used tracking dataset that contains 50 groups of video sequences with 11 different attributes, such as illumination changes and occlusions. OTB100 contains an additional 50 test sequences beyond OTB50. Each sequence may carry several video attributes, making the tracking task more challenging. The algorithm in this paper uses one-pass evaluation (OPE) to analyze performance, with tracking precision and success rate as the evaluation criteria. In video sequence Board_1, the algorithm in this paper, ECO (efficient convolution operators), SRDCF (spatially regularized correlation filters), and DeepSTRCF (deep spatial-temporal regularized correlation filters) all achieve accurate tracking, but the speed of the algorithm in this paper is substantially better than that of the three algorithms ECO, SRDCF, and DeepSTRCF. In video sequence Panda_1, the tracking of the algorithm in this paper remains stable at low resolution. In video sequence Box_1, only the algorithm in this paper accurately tracks the target from the initial frame to the last frame, because the Kalman filter is used to predict the direction of the target and effectively distinguish the target from the background, preventing the tracker from drifting to similar background regions. Experimental results show that the average precision and average success rate of the algorithm on the OTB50 and OTB100 datasets are 0.804 and 0.748, respectively, which are 7% and 16% higher than those of the BACF algorithm. Across the experimental sequences, the tracking success rate and precision of the algorithm in this paper are high while meeting real-time requirements, and the overall tracking performance is good.
Conclusion
This paper uses Kalman filtering to predict the direction and motion state of the target, assigns different weights to the background information in different directions for filter training, and obtains the maximum response value based on linear interpolation to determine the target position. The ADMM method transforms the target-model optimization into two subproblems that are solved for their optimal solutions, and an online adaptive method handles target deformation during model update. Numerous comparative experiments are performed on the OTB50 and OTB100 datasets. On OTB50, the success rate and precision of the proposed algorithm are 0.720 and 0.777, respectively. On OTB100, the success rate and precision are 0.773 and 0.828, respectively. Both are better than those of current mainstream algorithms, which shows that the algorithm in this paper has better accuracy and robustness. In background-aware tracking, the sampling scheme and weight allocation directly affect tracking performance. The next step is to conduct in-depth research on the construction of a speed-adaptive sample collection model.
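The conclusion above refers to locating the target at the maximum of the correlation response refined by interpolation. Since the abstract does not spell out that step, the sketch below uses a common quadratic (parabolic) sub-cell refinement of the discrete response peak as a stand-in for the paper's linear interpolation; it is illustrative only.

import numpy as np

def refine_peak(resp):
    # Find the discrete maximum of the response map and refine it to sub-cell
    # accuracy by fitting a parabola through the peak and its two neighbours
    # along each axis (a common refinement for correlation responses).
    r, c = np.unravel_index(np.argmax(resp), resp.shape)

    def offset(prev, centre, nxt):
        denom = prev - 2.0 * centre + nxt
        return 0.0 if denom == 0 else 0.5 * (prev - nxt) / denom

    dr = offset(resp[r - 1, c], resp[r, c], resp[r + 1, c]) if 0 < r < resp.shape[0] - 1 else 0.0
    dc = offset(resp[r, c - 1], resp[r, c], resp[r, c + 1]) if 0 < c < resp.shape[1] - 1 else 0.0
    return r + dr, c + dc

# Example: the refined peak of a smooth bump falls between grid cells.
yy, xx = np.mgrid[0:21, 0:21]
resp = np.exp(-((yy - 10.3) ** 2 + (xx - 9.7) ** 2) / 8.0)
print(refine_peak(resp))   # approximately (10.3, 9.7)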
Bertinetto L, Valmadre J, Golodetz S, Miksik O and Torr P H S. 2016. Staple: complementary learners for real-time tracking//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE: 1401-1409[DOI: 10.1109/CVPR.2016.156]
Bolme D S, Beveridge J R, Draper B A and Lui Y M. 2010. Visual object tracking using adaptive correlation filters//Proceedings of 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Francisco, USA: IEEE: 2544-2550[DOI: 10.1109/CVPR.2010.5539960]
Danelljan M, Häger G, Khan F S and Felsberg M. 2014. Accurate scale estimation for robust visual tracking//Proceedings of British Machine Vision Conference. London, UK: BMVA Press: 1-65[DOI: 10.5244/C.28.65]
Danelljan M, Häger G, Khan F S and Felsberg M. 2015. Learning spatially regularized correlation filters for visual tracking//Proceedings of 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE: 4310-4318[DOI: 10.1109/ICCV.2015.490]
Danelljan M, Robinson A, Khan F S and Felsberg M. 2016. Beyond correlation filters: learning continuous convolution operators for visual tracking//Proceedings of the 14th European Conference on Computer Vision. Amsterdam, Netherlands: Springer: 472-488[DOI: 10.1007/978-3-319-46454-1_29]
Fan H, Lin L T, Yang F, Chu P, Deng G, Yu S J, Bai H X, Xu Y, Liao C Y and Ling H B. 2019. LaSOT: a high-quality benchmark for large-scale single object tracking//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE: #00552[DOI: 10.1109/CVPR.2019.00552]
Galoogahi H K, Fagg A and Lucey S. 2017. Learning background-aware correlation filters for visual tracking//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 1144-1152[DOI: 10.1109/ICCV.2017.129]
Galoogahi H K, Sim T and Lucey S. 2015. Correlation filters with limited boundaries//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE: 4630-4638[DOI: 10.1109/CVPR.2015.7299094]
Henriques J F, Caseiro R, Martins P and Batista J. 2012. Exploiting the circulant structure of tracking-by-detection with kernels//Proceedings of the 12th European Conference on Computer Vision. Florence, Italy: Springer: 702-715[DOI: 10.1007/978-3-642-33765-9_50]
Henriques J F, Caseiro R, Martins P and Batista J. 2015. High-speed tracking with kernelized correlation filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(3): 583-596[DOI: 10.1109/TPAMI.2014.2345390]
Li C, Lu C Y, Zhao X, Zhang B M and Wang H Y. 2018. Scale adaptive correlation filtering tracking algorithm based on feature fusion. Acta Optica Sinica, 38(5): #0515001[DOI: 10.3788/AOS201838.0515001]
Li F, Tian C, Zuo W M, Zhang L and Yang M H. 2018. Learning spatial-temporal regularized correlation filters for visual tracking//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 4904-4913[DOI: 10.1109/CVPR.2018.00515]
Li Y and Zhu J K. 2014. A scale adaptive kernel correlation filter tracker with feature integration//Agapito L, Bronstein M M and Rother C, eds. Computer Vision-ECCV 2014 Workshops. Zurich, Switzerland: Springer: 254-265[DOI: 10.1007/978-3-319-16181-5_18]
Liu B, Xu T F, Li X M, Shi G K and Huang B. 2019. Adaptive context-aware correlation filter tracking. Chinese Journal of Optics, 12(2): 265-273[DOI: 10.3788/CO.20191202.0265]
Lu H C, Li P X and Wang D. 2018. Visual object tracking: a survey. Pattern Recognition and Artificial Intelligence, 31(1): 61-76[DOI: 10.16451/j.cnki.issn1003-6059.201801006]
Meng L and Yang Y. 2019. A survey of object tracking algorithms. Acta Automatica Sinica, 45(7): 1244-1260[DOI: 10.16383/j.aas.c180277]
Mueller M, Smith N and Ghanem B. 2017. Context-aware correlation filter tracking//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 1387-1395[DOI: 10.1109/CVPR.2017.152]
Pu D G and Jin Z. 2010. New Lagrangian multiplier methods. Journal of Tongji University (Natural Science), 38(9): 1387-1391[DOI: 10.3969/j.issn.0253-374x.2010.09.026]
Song R C, He X H and Wang Z Y. 2018. Complementary object tracking based on directional reliability. Acta Optica Sinica, 38(10): #1015001[DOI: 10.3788/AOS201838.1015001]
Wang N Y and Yeung D Y. 2013. Learning a deep compact image representation for visual tracking//Proceedings of the 26th International Conference on Neural Information Processing Systems. Red Hook, USA: Curran Associates Inc.: 809-817
Wang Y, Yin W T and Zeng J S. 2019. Global convergence of ADMM in nonconvex nonsmooth optimization. Journal of Scientific Computing, 78(1): 29-63[DOI: 10.1007/s10915-018-0757-z]
Welch G and Bishop G. 2001. An introduction to the Kalman filter: SIGGRAPH 2001 course 8//Computer Graphics, Annual Conference on Computer Graphics and Interactive Techniques. Los Angeles, USA: ACM Press, Addison-Wesley Publishing Company
Wu Y, Lim J and Yang M H. 2013. Online object tracking: a benchmark//Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition. Portland, USA: IEEE: 2411-2418[DOI: 10.1109/CVPR.2013.312]
Wu Y, Lim J and Yang M H. 2015. Object tracking benchmark. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9): 1834-1848[DOI: 10.1109/TPAMI.2014.2388226]
Xu Y B, Xu K, Wan J W, Xiong Z D and Li Y Y. 2018. Research on particle filter tracking method based on Kalman filter//Proceedings of the 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC). Xi'an, China: IEEE: 1564-1568[DOI: 10.1109/IMCEC.2018.8469578]
Yin M F, Bo Y M, Zhu J L and Wu P L. 2019. Multi-scale background-aware correlation filter tracking algorithm based on channel reliability. Acta Optica Sinica, 39(5): #0515002[DOI: 10.3788/AOS201939.0515002]
Yun S, Choi J, Yoo Y, Yun K M and Choi J Y. 2017. Action-decision networks for visual tracking with deep reinforcement learning//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 1349-1358[DOI: 10.1109/CVPR.2017.148]