Deformable object tracking fusing image saliency and feature point matching

Yang Yong, Yan Junhua, Jing Qingfeng (College of Astronautics, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China)

Abstract
Objective To address tracking failures caused by severe deformation of the target during tracking, especially drastic scale changes, a target tracking algorithm that fuses image saliency with feature point matching is proposed. Method First, an improved BRISK (binary robust invariant scalable keypoints) feature point detector extracts feature points from the initial frame of the video sequence to establish the target template and the template feature point set. Feature points are then detected in the current frame and matched against the template feature point set with FLANN (fast library for approximate nearest neighbors) to obtain a subset of matched feature points. Next, the matched feature points are fused with optical flow feature points to determine a reliable feature point set. A homography transformation matrix is then computed from the reliable feature point set and the template feature point set to coarsely locate the target tracking box, after which the box is refined using LC (local contrast) image saliency. Finally, image saliency and the reliable feature points are fused to adaptively determine the target tracking box. When the target deforms severely in three consecutive frames, the target template and its feature point set are updated. Result To verify the performance of the algorithm, eight video sequences with deformation attributes, totaling 2 214 frames, were selected from the OTB2013 dataset as the experimental dataset. In the overlap rate experiment, the proposed algorithm achieves an average overlap rate of 0.567 1, outperforming current state-of-the-art tracking algorithms; in the overlap success rate experiment, it also tracks better than these algorithms. Finally, Vega Prime was used to simulate an aerial video sequence in which the target deforms severely as a UAV rapidly approaches it; the maximum deformation of the target in the sequence exceeds 14, and the maximum inter-frame deformation reaches 1.72. Experiments show that the proposed algorithm tracks this sequence better. The algorithm also runs in real time, at an average frame rate of 48.6 frames/s. Conclusion The proposed algorithm can accurately track severely deforming targets in real time, especially targets with drastic scale changes.
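As a concrete illustration of the detection and matching steps described above, the sketch below detects BRISK feature points and matches them with FLANN using standard OpenCV calls. It is a minimal sketch under stated assumptions, not the paper's implementation: the function name match_template_to_frame, the LSH index parameters, and the 0.75 ratio-test threshold are illustrative choices.

# Minimal sketch (not the authors' code): BRISK detection + FLANN matching.
# The ratio-test threshold and LSH index parameters are assumed values.
import cv2

def match_template_to_frame(template_gray, frame_gray, ratio=0.75):
    """Detect BRISK keypoints in both images and match them with FLANN."""
    brisk = cv2.BRISK_create()
    kp_t, des_t = brisk.detectAndCompute(template_gray, None)
    kp_f, des_f = brisk.detectAndCompute(frame_gray, None)
    if des_t is None or des_f is None:
        return [], kp_t, kp_f

    # BRISK descriptors are binary, so FLANN is configured with an LSH index.
    flann = cv2.FlannBasedMatcher(
        dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1),
        dict(checks=50))
    matches = flann.knnMatch(des_t, des_f, k=2)

    # Lowe's ratio test keeps only distinctive matches.
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    return good, kp_t, kp_f

The matched points returned here would then be fused with optical-flow points (e.g., from cv2.calcOpticalFlowPyrLK) to form the reliable feature point set described in the Method.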
Keywords
Deformation object tracking based on the fusion of invariant scalable key point matching and image saliency

Yang Yong, Yan Junhua, Jing Qingfeng(College of Astronautics, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China)

Abstract
Objective A target tracking algorithm is proposed to address severe deformation of the target during tracking, especially dramatic scale changes, which leads to tracking failure. The algorithm fuses invariant scalable key point matching with image saliency. Method First, the target template and its feature point set are determined from the initial frame of a video sequence, whose feature points are extracted by the improved BRISK feature point detection algorithm. The feature points of the current frame are then extracted and matched with the target template feature points using the FLANN method to obtain a subset of matching feature points. Second, the matching feature points and the optical flow feature points are fused to determine the set of reliable feature points. Third, on the basis of the reliable feature point set and the target template feature point set, a homography transformation matrix is calculated to coarsely determine the target tracking box, and the box is then refined using image saliency computed with the LC method. Finally, the target tracking box is adaptively determined by fusing image saliency and the reliable feature points. To cope with severe non-rigid deformation, the target template and its feature point set are updated when the target is drastically deformed in three consecutive frames. Result A total of 2 214 frames from eight video sequences with intense deformation were selected from the OTB2013 dataset as the experimental dataset to verify the performance of the proposed algorithm. In the overlap rate experiment, the proposed algorithm achieves an average overlap rate of 0.567 1, better than current state-of-the-art tracking algorithms. In the overlap success rate experiment, the proposed algorithm also tracks better than these algorithms. Vega Prime is used to simulate a UAV aerial video sequence in which the target undergoes dramatic deformation as the UAV rapidly approaches it. The maximum deformation of the target in the sequence exceeds 14, and the maximum deformation between frames reaches 1.72. Experiments show that the proposed algorithm achieves better tracking performance on this sequence. Conclusion Experimental results show that the proposed algorithm can track targets undergoing severe deformation, especially severe scale changes. The proposed algorithm has good real-time performance, with an average frame rate of 48.6 frames/second.
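The two localization steps can be sketched as follows, again only as an assumption-laden illustration rather than the paper's code: the coarse tracking box is obtained by mapping the template box through a RANSAC homography, and the refinement uses an LC saliency map, here assumed to be the histogram-based global intensity-contrast measure; the 3.0-pixel reprojection threshold is an assumed value.

# Minimal sketch (not the authors' code): coarse box from a homography,
# plus an LC-style saliency map for refinement. Thresholds are assumed.
import cv2
import numpy as np

def coarse_box_from_homography(template_pts, frame_pts, template_box):
    """Map the template box corners into the current frame via a homography."""
    H, _ = cv2.findHomography(template_pts, frame_pts, cv2.RANSAC, 3.0)
    if H is None:
        return None  # too few reliable points for a homography
    x, y, w, h = template_box
    corners = np.float32([[x, y], [x + w, y],
                          [x + w, y + h], [x, y + h]]).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
    x0, y0 = warped.min(axis=0)
    x1, y1 = warped.max(axis=0)
    return int(x0), int(y0), int(x1 - x0), int(y1 - y0)

def lc_saliency(gray):
    """Per-pixel saliency as the summed intensity distance to all other
    pixels, computed through a 256-bin histogram."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).flatten()
    levels = np.arange(256, dtype=np.float64)
    dist = np.abs(levels[:, None] - levels[None, :]) @ hist  # dist[k] = sum_j hist[j] * |k - j|
    sal = dist[gray]
    return cv2.normalize(sal, None, 0, 1, cv2.NORM_MINMAX)

Thresholding the saliency map inside the coarse box and weighting it against the reliable feature points would then give the adaptive box determination described in the Method.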
Keywords
