Real-time object tracking based on high-confidence complementary learning
2019, Vol. 24, No. 8, Pages: 1315-1326
Received: 2018-12-21; Revised: 2019-02-22; Published in print: 2019-08-16
DOI: 10.11834/jig.180684
Objective
Discriminative object tracking algorithms typically address model drift by constructing more reliable samples or adopting more robust classifiers on top of the predicted result, thereby overlooking an efficient and concise confidence-evaluation step. To address this, a real-time object tracking algorithm based on high-confidence complementary learning (HCCL-Staple) is proposed.
Method
The confidence-evaluation problem is decomposed into independent confidence computation and complementary reliability judgment within each sub-model. For the correlation filter model, the average peak-to-correlation energy (APCE) of the output is computed and combined with the maximum response value as the reliability criterion: only when both values exceed their historical averages by certain ratios is the result judged reliable and the model updated. The output of the color probability model is converted into a binary image by thresholding, and pixel-level connected component properties (PCCP) are extracted from the binary image by morphological analysis. Reliability is judged by jointly considering the number of connected components, the area of the largest connected component, and its rectangularity: when most of these confidence parameters take high-confidence forms, the result is judged reliable and the model is updated; otherwise, the result is judged unreliable, the model's fusion weight is reduced, and updating is stopped.
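For reference, APCE is defined in the large-margin tracking literature (Wang et al., 2017, listed in the references below) from the maximum value, minimum value, and individual elements of the response map:

$$\mathrm{APCE}=\frac{\left|F_{\max}-F_{\min}\right|^{2}}{\operatorname{mean}\left(\sum_{w,h}\left(F_{w,h}-F_{\min}\right)^{2}\right)}$$

where $F_{\max}$, $F_{\min}$, and $F_{w,h}$ denote the maximum, minimum, and $(w,h)$-th element of the response map, respectively.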
Result
Experimental results on the OTB-2015 dataset show that, compared with the original algorithm, HCCL-Staple improves distance precision by 3.2% and success rate by 2.7% at a tracking speed of 32.849 frames per second. It effectively prevents model drift both in scenes where color features adapt poorly and in complex scenes where the target is occluded, and it achieves favorable tracking performance compared with current mainstream tracking algorithms.
Conclusion
Both sub-model confidence-judgment methods can effectively estimate reliability in sensitive scenes that are likely to produce low-confidence results, and they are also applicable to confidence judgment for other models with the same output form. By using these judgment strategies in a complementary manner, HCCL-Staple effectively prevents model drift and significantly improves tracking precision while remaining fast.
Objective
Object tracking is an important research subject in computer vision, with a wide range of applications in surveillance and human-computer interaction. Recently, trackers based on the correlation filter have shown excellent performance because of their great robustness and high efficiency. Building on correlation filter theory, an increasing number of trackers improve performance through feature fusion, such as introducing color features to strengthen the tracker's recognition ability. However, color features are not robust in scenes with similar-colored objects or background clutter, and such scenes can therefore be used to evaluate the confidence of color models. In addition, traditional correlation-filter-based methods usually update the model every frame without confidence evaluation, which can lead to model drift when the target is occluded or the tracker predicted an incorrect position in the previous frame. Many trackers solve these problems by constructing more reliable samples or adopting stronger classifiers, which sacrifices tracking speed. Our work instead targets incorrect samples through confidence evaluation, which does not require attending to their internal details and feature structures. However, defining a comprehensive and robust evaluation index that also satisfies the requirement of high speed is difficult. Therefore, a real-time object tracking method based on a high-confidence complementary learning strategy is proposed.
Method
Our method divides the confidence problem into independent confidence computation and complementary reliability judgment within each sub-model, targeting scenes whose specific attributes easily lead to unreliable learning and are sensitive for confidence evaluation. First, the average peak-to-correlation energy (APCE) of the correlation filter model in Staple is computed; together with the maximum of the model's response map, it constitutes the confidence evaluation criterion. The result is considered high-confidence only if both criteria in the current frame exceed their historical average values by certain ratios, in which case the correlation filter model, including the translation filter and the scale filter, is updated.
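A minimal sketch of this dual-criterion check, assuming NumPy and hypothetical ratio thresholds beta_apce and beta_max (the paper's actual values are not given in this abstract):

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy of a response map."""
    f_min = response.min()
    return abs(response.max() - f_min) ** 2 / np.mean((response - f_min) ** 2)

def cf_high_confidence(response, apce_history, fmax_history,
                       beta_apce=0.5, beta_max=0.6):
    """Allow a filter update only when both APCE and the peak response
    exceed the given ratios of their historical means."""
    cur_apce, cur_fmax = apce(response), response.max()
    if not apce_history:                 # first frame: accept and seed history
        apce_history.append(cur_apce)
        fmax_history.append(cur_fmax)
        return True
    reliable = (cur_apce >= beta_apce * np.mean(apce_history) and
                cur_fmax >= beta_max * np.mean(fmax_history))
    # Whether the histories grow on every frame or only on reliable
    # frames is a design choice; here they grow on every frame.
    apce_history.append(cur_apce)
    fmax_history.append(cur_fmax)
    return reliable
```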
Next, the output of the color probability model, called the pixel-wise color probability map in Staple, is transformed into a binary image using the classic Otsu thresholding method, and the connected components are extracted from the binary image after a morphological opening operation. The connected component that contains the most pixels is regarded as the main connected component of the binary image. The result is judged with an overall consideration of the pixel-level connected component properties (PCCP), including the area of the main connected component, the number of all connected components, and the rectangularity of the main connected component. When most of these property values take on forms that indicate high confidence, the result is considered high-confidence and the color probability model is updated. Otherwise, the result is considered low-confidence: the fusion weight of the model is reduced, and updates to the model are stopped.
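A sketch of this color-model check under stated assumptions: OpenCV's Otsu threshold and connected-component statistics are used, and the vote thresholds (max_components, min_area_ratio, min_rectangularity) are illustrative stand-ins for the paper's actual parameters:

```python
import cv2
import numpy as np

def color_high_confidence(prob_map, max_components=5,
                          min_area_ratio=0.05, min_rectangularity=0.6):
    """Judge the color probability model via pixel-level connected
    component properties (PCCP) of its thresholded output."""
    # Otsu threshold on the 8-bit probability map, then a morphological
    # opening to suppress isolated noise pixels before extraction.
    img = (prob_map * 255).astype(np.uint8)
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

    num, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    if num <= 1:                          # background only: nothing reliable
        return False
    areas = stats[1:, cv2.CC_STAT_AREA]   # skip background label 0
    k = int(np.argmax(areas)) + 1         # main (largest) component
    area = stats[k, cv2.CC_STAT_AREA]
    w, h = stats[k, cv2.CC_STAT_WIDTH], stats[k, cv2.CC_STAT_HEIGHT]
    rectangularity = area / float(w * h)  # area / bounding-box area

    votes = [num - 1 <= max_components,
             area >= min_area_ratio * binary.size,
             rectangularity >= min_rectangularity]
    return sum(votes) >= 2                # majority of properties agree
```

On a low-confidence frame, the per-frame weight with which Staple fuses the color score into the final response would then be scaled down, and the color histograms would be left unchanged.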
Result
As shown by the experimental results on the OTB-2015 dataset, the distance precision of HCCL-Staple (Staple with the high-confidence complementary learning strategy) increased by 3.2% and the success rate by 2.7% in comparison with the original Staple algorithm. These improvements were achieved at a high speed of 32.849 frames per second. In scenes where color features are weakened by attributes such as poor illumination, similar objects, and background clutter, and in complex scenes where occlusion or out-of-view situations occur frequently, HCCL-Staple can avoid model drift effectively. Moreover, HCCL-Staple outperforms sophisticated trackers according to the OTB benchmark.
Conclusion
HCCL-Staple, which adopts the high-confidence complementary learning strategy, is an efficient scheme for addressing the model drift that arises under the traditional learning strategy in challenging scenes with occlusion and interference from similar objects. The method translates the tracker's need for reliable learning samples into reducing or suppressing the influence of unreliable ones. The experimental data show that the confidence computation methods and the high-confidence judgment conditions work well for both the correlation filter model and the color probability model, and that they have good applicability to confidence evaluation for other models whose outputs take the same form. HCCL-Staple pays less attention to feature details of the target appearance under illumination change, scale change, or deformation, and focuses instead on confidence evaluation. Thus, HCCL-Staple achieves a tracking effect comparable to algorithms that use complex deep features or machine learning methods, and outperforms some state-of-the-art tracking algorithms without using any sophisticated formulations or optimization models.
References
Zhang W, Kang B S. Recent advances in correlation filter-based object tracking: a review[J]. Journal of Image and Graphics, 2017, 22(8): 1017-1033. [DOI: 10.11834/jig.170092]
Bertinetto L, Valmadre J, Golodetz S, et al. Staple: complementary learners for real-time tracking[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE, 2016: 1401-1409. [DOI: 10.1109/CVPR.2016.156]
Possegger H, Mauthner T, Bischof H. In defense of color-based model-free tracking[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA, USA: IEEE, 2015: 2113-2120. [DOI: 10.1109/CVPR.2015.7298823]
Babenko B, Yang M H, Belongie S. Robust object tracking with online multiple instance learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(8):1619-1632.[DOI:10.1109/TPAMI.2010.226]
Kalal Z, Mikolajczyk K, Matas J. Tracking-learning-detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(7):1409-1422.[DOI:10.1109/TPAMI.2011.239]
Bolme D S, Beveridge J R, Draper B A, et al. Visual object tracking using adaptive correlation filters[C]//Proceedings of 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Francisco, USA: IEEE, 2010: 2544-2550. [DOI: 10.1109/CVPR.2010.5539960]
Ma C, Yang X K, Zhang C Y, et al. Long-term correlation tracking[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA, USA: IEEE, 2015: 5388-5396. [DOI: 10.1109/CVPR.2015.7299177]
Wang M M, Liu Y, Huang Z Y. Large margin object tracking with circulant feature maps[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE, 2017: 4800-4808. [DOI: 10.1109/CVPR.2017.510]
Wu Y, Lim J, Yang M H. Object tracking benchmark[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9):1834-1848.[DOI:10.1109/TPAMI.2014.2388226]
Danelljan M, Häger G, Khan F S, et al. Accurate scale estimation for robust visual tracking[C]//Proceedings of 2014 British Machine Vision Conference. Nottingham: BMVA Press, 2014: 65.1-65.11. [DOI: 10.5244/C.28.65]
Henriques J F, Caseiro R, Martins P, et al. High-speed tracking with kernelized correlation filters[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(3):583-596.[DOI:10.1109/TPAMI.2014.2345390]
Gonzalez R C, Woods R E. Digital Image Processing[M]. Ruan Q Q, Ruan Y Z, trans. Beijing: Publishing House of Electronics Industry, 2011: 407-416.
Li Y, Zhu J K. A scale adaptive kernel correlation filter tracker with feature integration[C]//Proceedings of 2014 European Conference on Computer Vision. Zurich, Switzerland: Springer, 2015: 254-265. [DOI: 10.1007/978-3-319-16181-5_18]
Wu Y, Lim J, Yang M H. Online object tracking: a benchmark[C]//Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition. Portland, OR, USA: IEEE, 2013: 2411-2418. [DOI: 10.1109/CVPR.2013.312]
Danelljan M, Häger G, Khan F S, et al. Learning spatially regularized correlation filters for visual tracking[C]//Proceedings of 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2016: 4310-4318. [DOI: 10.1109/ICCV.2015.490]
Ma C, Huang J B, Yang X K, et al. Hierarchical convolutional features for visual tracking[C]//Proceedings of 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015: 3074-3082. [DOI: 10.1109/ICCV.2015.352]
Danelljan M, Khan F S, Felsberg M, et al. Adaptive color attributes for real-time visual tracking[C]//Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus, USA: IEEE, 2014: 1090-1097. [DOI: 10.1109/CVPR.2014.143]