Published: 2017-04-16 DOI: 10.11834/jig.20170410 2017 | Volume 22 | Number 4 Image Understanding and Computer Vision

Received: 2016-10-10; revised: 2016-12-22. Supported by: National Natural Science Foundation of China (61172144); Science and Technology Research Program of Liaoning Province (2012216026). First author: Liu Daqian (1992-), male, Ph.D. student in mine spatial information engineering, School of Electronic and Information Engineering, Liaoning Technical University; research interests: image and visual information computing, moving target detection and tracking. E-mail: liudaqianlntu@163.com. CLC number: TP301.6; Document code: A; Article ID: 1006-8961(2017)04-0502-14


Anti-interference contour tracking under prior model constraint
Liu Daqian1, Liu Wanjun2, Fei Bowen3
1. School of Electronic and Information Engineering, Liaoning Technical University, Huludao 125105, China;
2. School of Software, Liaoning Technical University, Huludao 125105, China;
3. School of Business and Management, Liaoning Technical University, Huludao 125105, China
Supported by: National Natural Science Foundation of China (61172144)

# Abstract

Objective Target tracking plays an important role in computer vision and is widely applied in intelligent transportation, robot vision, and motion capture. In recent years, experts and scholars have proposed numerous excellent target tracking algorithms to cope with illumination changes, target deformation, partial (or even full) occlusion, complex backgrounds, and other disturbances. Handling changes in the target contour remains one of the popular topics in the field. Because a level set can naturally handle changes in the topology of a target, many researchers have adopted the level set method for contour extraction and tracking. In 2004, Freedman used the Bhattacharyya distance, and in 2005, Zhang used the Kullback-Leibler distance, to measure the target distribution and locate the best candidate region; combining foreground/background matching flows, they proposed a combined flow method. However, these two algorithms depend on the initial target selection: when the initial contour differs from the actual contour of the object, the algorithms require many iterations to converge. Chiverton proposed an online contour tracking algorithm based on a learning model, which establishes a prior target model from the initial target morphology and constrains the contour tracking process with that model. Ning proposed an approach that uses the morphological information of the initial delineation of the target to establish the prior model, adopts the level set method for the implicit representation of the foreground and background regions, and determines the foreground/background distribution with the Bhattacharyya similarity measure to realize accurate tracking.
Rathi adopted the geometric active contour model within a particle filter framework to track fast-moving, deforming targets; the algorithm not only handles affine transformations but also accurately estimates targets undergoing non-affine deformation. Contour extraction methods based on level sets are thus extensively applied to tracking moving targets, but traditional methods are easily affected by partial occlusion from other targets and by complex backgrounds. A novel anti-interference contour tracking approach under a prior model constraint is proposed to solve these problems. Method The proposed approach uses a simple model-matching algorithm to track the first five frames of the image sequence. A training sample set is established from the superpixel blocks obtained via superpixel segmentation, and superpixel blocks with the same color features are grouped into cluster sets using the mean shift algorithm. The confidence probability of each cluster is then calculated, and a prior model of the target is constructed according to the confidence degrees of the clusters. Subsequently, the target contour is extracted using level set segmentation. To avoid the influence of partial occlusion and complex backgrounds, this study proposes a novel decision-making method that determines whether a shape prior model is required to constrain the level set evolution, thereby obtaining more robust tracking results. Lastly, an online appearance-model updating algorithm is proposed, which appends appropriate feature compensations to the feature sets to improve updating accuracy. Because the algorithm fuses the evolution results of shape and color features, the feature-loss and redundant-feature problems that arise when the target is occluded are effectively alleviated.
Result Six sets of common video sequences covering challenging factors such as illumination change, partial occlusion, target deformation, and complex background are used to verify the performance of the proposed algorithm. The algorithm is compared with existing contour tracking algorithms based on density matching with level sets, learned distribution metrics, and joint registration with active contour segmentation. The proposed contour tracking algorithm achieves the same or even higher tracking accuracy than these excellent contour tracking algorithms. The average center errors on the video sequences Fish, Face1, Face2, Shop, Train, and Lemming are 3.46, 7.16, 3.82, 13.42, 14.72, and 12.47, respectively; the tracking overlap ratios are 0.92, 0.74, 0.85, 0.77, 0.73, and 0.82, respectively; and the average running speeds are 4.27, 4.03, 3.11, 2.94, 2.16, and 1.71 frames/s, respectively. Conclusion Experimental results indicate that the prior model constraint and the decision-making step in the contour extraction process give the algorithm accurate tracking and strong adaptability under partial occlusion, target deformation, target rotation, and complex backgrounds. The characteristics of the proposed approach are as follows: 1) a prior model of the target is built by clustering the training sample set, removing the interference of non-target information in the image and making the prior model describe the target more accurately; 2) a decision-making method is proposed to judge whether the prior model is required; if its constraint must be introduced, the result in the shape subspace and the evolution result in the color space are fused during level set segmentation; 3) an online appearance-model updating algorithm is proposed, which appends appropriate feature compensations to the feature sets, thereby ensuring the accuracy of the model.

# Key words

prior model; level set; decision-making; feature compensation; contour tracking

# 0 Introduction

1) A prior model of the target is built by clustering the training sample set, removing the interference of non-target information in the image so that the prior model describes the target more accurately.

2) A decision-making method is proposed to judge whether the prior model needs to be introduced.

3) If the prior model constraint must be introduced, the result in the shape subspace and the evolution result in the color space are fused during level set segmentation.

4) An online model updating algorithm is proposed, which adds appropriate feature compensations to the feature set to ensure the accuracy of the model.

# 1 Algorithm overview

The AC-PMC algorithm consists of three stages: building the prior model of the target, anti-interference contour extraction, and model updating. The flow of the algorithm is shown in Fig. 1.

# 2.1 Building the prior model of the target

 $C_i^c = \frac{Y(i) - E(i)}{S(i)}, \quad \forall i = 1, \ldots, n$ (1)
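Eq. (1) has the form of a z-score: a statistic $Y(i)$ centered by $E(i)$ and scaled by $S(i)$. A minimal numerical sketch (the precise meaning of $Y$, $E$, and $S$ follows the paper's definitions, which are assumed here):

```python
import numpy as np

def cluster_confidence(Y, E, S):
    """Eq. (1): C_i = (Y(i) - E(i)) / S(i), a standardized
    (z-score-like) confidence for each of the n clusters."""
    Y, E, S = map(np.asarray, (Y, E, S))
    return (Y - E) / S

print(cluster_confidence([5.0, 2.0], [3.0, 2.0], [1.0, 4.0]))  # → [2. 0.]
```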

 $p\left( \varphi \right) = \left\{ p_u\left( \varphi \right) \right\}, \quad \sum\limits_{u = 1}^m p_u\left( \varphi \right) = 1, \quad u = 1, \ldots, m$ (4)

Let $\{ pixe{l_i}\}, i = 1, \ldots, n$ denote the set of pixels in the target region. The feature-space quantization function $b(x)$ from Ref. [11] is applied to quantize the pixel set $\{ pixe{l_i}\}$, written $b(pixe{l_i})$; the target feature probability is then

 ${q_u} = \frac{1}{n}\sum\limits_{i{\rm{ }} = {\rm{ }}1}^n {\delta \left[{b{\rm{ }}\left( {pixe{l_i}} \right)-u} \right]}$ (5)

 $p_u\left( \varphi \right) = \frac{1}{\sum\limits_{i = 1}^n H\left( \varphi \left( pixe{l_i} \right) \right)} \sum\limits_{i = 1}^n H\left( \varphi \left( pixe{l_i} \right) \right)\delta \left[ b\left( pixe{l_i} \right) - u \right]$ (6)
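As a minimal numerical sketch (not the paper's implementation), Eqs. (5) and (6) can be computed for pre-quantized pixels as follows; the bin count, the smoothed Heaviside, and all names are illustrative assumptions:

```python
import numpy as np

def heaviside(phi, eps=1.0):
    """Smoothed Heaviside function commonly used in level set methods."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def target_histogram(pixels, m):
    """Eq. (5): q_u = (1/n) * sum_i delta[b(pixel_i) - u].
    `pixels` holds pre-quantized bin indices b(pixel_i) in {0, ..., m-1}."""
    return np.bincount(pixels, minlength=m).astype(float) / pixels.size

def weighted_histogram(pixels, phi, m):
    """Eq. (6): histogram weighted by H(phi), so pixels inside the contour
    (phi > 0) dominate; normalized by the total Heaviside mass."""
    w = heaviside(phi)
    p = np.zeros(m)
    np.add.at(p, pixels, w)   # accumulate weights into bins
    return p / w.sum()

# toy example: an 8-bin quantization of a small region
rng = np.random.default_rng(0)
pix = rng.integers(0, 8, size=100)
phi = rng.normal(size=100)
q = target_histogram(pix, 8)
p = weighted_histogram(pix, phi, 8)
assert np.isclose(q.sum(), 1.0) and np.isclose(p.sum(), 1.0)
```

Both histograms sum to 1 by construction, matching the normalization in Eq. (4).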

# 2.2.2 Deformation-based similarity measure

The Bhattacharyya similarity measure is widely used in the field of target tracking. It measures the similarity between two discrete or continuous probability density distributions: the higher the Bhattacharyya coefficient, the more similar the two distributions. The similarity measure is expressed as

 $E\left( \varphi \right) = \sum\limits_{u = 1}^m {\sqrt {{p_u}\left( \varphi \right){q_u}} }$ (7)
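A small sketch of Eq. (7), the Bhattacharyya coefficient between the candidate histogram $p_u(\varphi)$ and the target model $q_u$ (illustrative, assuming both are given as normalized arrays):

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient of two normalized histograms; 1.0 means
    identical distributions, values near 0 mean little overlap."""
    return float(np.sum(np.sqrt(p * q)))

p = np.array([0.25, 0.25, 0.25, 0.25])
q = np.array([0.25, 0.25, 0.25, 0.25])
print(bhattacharyya(p, q))  # → 1.0
```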

 $\begin{array}{c} E\left( \varphi \right) = \sum\limits_{u = 1}^m \sqrt{p_u\left( \varphi \right) q_u} \approx \sum\limits_{u = 1}^m \sqrt{p_u\left( \varphi_0 \right) q_u} + \\ \frac{1}{2}\sum\limits_{u = 1}^m \left( p_u\left( \varphi \right) - p_u\left( \varphi_0 \right) \right) \sqrt{\frac{q_u}{p_u\left( \varphi_0 \right)}} = \\ \sum\limits_{u = 1}^m \sqrt{p_u\left( \varphi_0 \right) q_u} + \frac{1}{2}\sum\limits_{u = 1}^m p_u\left( \varphi \right) \sqrt{\frac{q_u}{p_u\left( \varphi_0 \right)}} - \\ \frac{1}{2}\sum\limits_{u = 1}^m \sqrt{p_u\left( \varphi_0 \right) q_u} = \\ \frac{1}{2}\left( \sum\limits_{u = 1}^m \sqrt{p_u\left( \varphi_0 \right) q_u} + \sum\limits_{u = 1}^m p_u\left( \varphi \right) \sqrt{\frac{q_u}{p_u\left( \varphi_0 \right)}} \right) \end{array}$

 $\begin{array}{c} E\left( \varphi \right) = \frac{1}{2}\left( \sum\limits_{u = 1}^m \sqrt{p_u\left( \varphi_0 \right) q_u} + \sum\limits_{u = 1}^m p_u\left( \varphi \right) \sqrt{\frac{q_u}{p_u\left( \varphi_0 \right)}} \right) = \\ \frac{1}{2}\sum\limits_{u = 1}^m \sqrt{p_u\left( \varphi_0 \right) q_u} + \frac{1}{2\sum\limits_{i = 1}^n H\left( \varphi \left( pixe{l_i} \right) \right)} \times \\ \sum\limits_{i = 1}^n \sum\limits_{u = 1}^m \sqrt{\frac{q_u}{p_u\left( \varphi_0 \right)}} H\left( \varphi \left( pixe{l_i} \right) \right)\delta \left[ b\left( pixe{l_i} \right) - u \right] \end{array}$
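As an illustrative numerical sanity check (not from the paper), the linearized energy above coincides with the exact Bhattacharyya coefficient at $\varphi = \varphi_0$, since $\sum_u p_u(\varphi_0)\sqrt{q_u / p_u(\varphi_0)} = \sum_u \sqrt{p_u(\varphi_0) q_u}$:

```python
import numpy as np

rng = np.random.default_rng(1)
q = rng.random(16); q /= q.sum()      # target model histogram
p0 = rng.random(16); p0 /= p0.sum()   # candidate histogram at phi_0

exact = np.sum(np.sqrt(p0 * q))       # E(phi_0) from Eq. (7)
linearized = 0.5 * (np.sum(np.sqrt(p0 * q)) + np.sum(p0 * np.sqrt(q / p0)))

# the first-order expansion is exact at the expansion point phi = phi_0
assert np.isclose(exact, linearized)
```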

# 参考文献

• [1] Vatavu A, Danescu R, Nedevschi S. Stereovision-based multiple object tracking in traffic scenarios using free-form obstacle delimiters and particle filters[J]. IEEE Transactions on Intelligent Transportation Systems, 2015, 16(1): 498–511. [DOI:10.1109/TITS.2014.2366248]
• [2] Lian F, Han C Z, Liu W F, et al. Tracking partly resolvable group targets using SMC-PHDF[J]. Acta Automatica Sinica, 2010, 36(5): 731–741. [连峰, 韩崇昭, 刘伟峰, 等. 基于SMC-PHDF的部分可分辨的群目标跟踪算法[J]. 自动化学报, 2010, 36(5): 731–741. ] [DOI:10.3724/SP.J.1004.2010.00731]
• [3] Khatoonabadi S H, Bajic I V. Video object tracking in the compressed domain using spatio-temporal Markov random fields[J]. IEEE Transactions on Image Processing, 2013, 22(1): 300–313. [DOI:10.1109/TIP.2012.2214049]
• [4] Wang M H, Liang Y, Liu F M, et al. Object tracking based on component-level appearance model[J]. Journal of Software, 2015, 26(10): 2733–2747. [王美华, 梁云, 刘福明, 等. 部件级表观模型的目标跟踪方法[J]. 软件学报, 2015, 26(10): 2733–2747. ] [DOI:10.13328/j.cnki.jos.004737]
• [5] Ganta R R, Zaheeruddin S, Baddiri N, et al. Segmentation of oil spill images with illumination-reflectance based adaptive level set model[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2012, 5(5): 1394–1402. [DOI:10.1109/JSTARS.2012.2201249]
• [6] Li X, Dick A, Shen C H, et al. Incremental learning of 3D-DCT compact representations for robust visual tracking[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(4): 863–881. [DOI:10.1109/TPAMI.2012.166]
• [7] Smeulders A W M, Chu D M, Cucchiara R, et al. Visual tracking:an experimental survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(7): 1442–1468. [DOI:10.1109/TPAMI.2013.230]
• [8] Freedman D, Zhang T. Active contours for tracking distributions[J]. IEEE Transactions on Image Processing, 2004, 13(4): 518–526. [DOI:10.1109/TIP.2003.821445]
• [9] Zhang T, Freedman D. Improving performance of distribution tracking through background mismatch[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(2): 282–287. [DOI:10.1109/TPAMI.2005.31]
• [10] Chiverton J, Xie X H, Mirmehdi M. Automatic bootstrapping and tracking of object contours[J]. IEEE Transactions on Image Processing, 2012, 21(3): 1231–1245. [DOI:10.1109/TIP.2011.2167343]
• [11] Ning J F, Zhang L, Zhang D, et al. Joint registration and active contour segmentation for object tracking[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2013, 23(9): 1589–1597. [DOI:10.1109/TCSVT.2013.2254931]
• [12] Rathi Y, Vaswani N, Tannenbaum A, et al. Tracking deforming objects using particle filtering for geometric active contours[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(8): 1470–1475. [DOI:10.1109/TPAMI.2007.1081]
• [13] Oron S, Bar-Hillel A, Levi D, et al. Locally orderless tracking[C]//Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition. Providence, RI:IEEE, 2012:1940-1947.[DOI:10.1109/CVPR.2012.6247895]
• [14] Wang S, Lu H C, Yang F, et al. Superpixel tracking[C]//Proceedings of 2011 IEEE International Conference on Computer Vision. Barcelona, Spain:IEEE, 2011:1323-1330.[DOI:10.1109/ICCV.2011.6126385]
• [15] Comaniciu D, Meer P. Mean shift:a robust approach toward feature space analysis[J]. IEEE Transaction on Pattern Analysis and Machine Intelligence, 2002, 24(5): 603–619. [DOI:10.1109/34.1000236]
• [16] Chan T F, Vese L A. Active contours without edges[J]. IEEE Transactions on Image Processing, 2001, 10(2): 266–277. [DOI:10.1109/83.902291]
• [17] Clement J, Novas N, Gazquez J A, et al. An active contour computer algorithm for the classification of cucumbers[J]. Computers and Electronics in Agriculture, 2013, 92: 75–81. [DOI:10.1016/j.compag.2013.01.006]
• [18] Ma B, Wu Y W. Learning distribution metric for object contour tracking[C]//Proceedings of 2011 International Conference on Multimedia Technology. Hangzhou:IEEE, 2011:3120-3123.[DOI:10.1109/ICMT.2011.6001851]
• [19] Xu Y H, Tian Z H, Zhang Y Q, et al. Adaptively combining color and depth for human body contour tracking[J]. Acta Automatica Sinica, 2014, 40(8): 1623–1634. [徐玉华, 田尊华, 张跃强, 等. 自适应融合颜色和深度信息的人体轮廓跟踪[J]. 自动化学报, 2014, 40(8): 1623–1634. ] [DOI:10.3724/SP.J.1004.2014.01623]
• [20] Wang F, Fang S. Visual tracking based on the discriminative dictionary and weighted local features[J]. Journal of Image and Graphics, 2014, 19(9): 1316–1323. [王飞, 房胜. 加权局部特征结合判别式字典的目标跟踪[J]. 中国图象图形学报, 2014, 19(9): 1316–1323. ] [DOI:10.11834/jig.20140908]
• [21] Yang B, Lin G Y, Zhang W G, et al. Robust object tracking incorporating residual unscented particle filter and discriminative sparse representation[J]. Journal of Image and Graphics, 2014, 19(5): 730–738. [杨彪, 林国余, 张为公, 等. 融合残差Unscented粒子滤波和区别性稀疏表示的鲁棒目标跟踪[J]. 中国图象图形学报, 2014, 19(5): 730–738. ] [DOI:10.11834/jig.20140511]