Zhang Canlong, Tang Yanping, Li Zhixin, Wang Zhiwen, Cai Bing. Joint tracking of infrared-visible target via spatiogram representation[J]. Journal of Image and Graphics, 2017, 22(4): 492-501. DOI: 10.11834/jig.20170409.
This study proposes a joint spatiogram tracker that addresses the real-time and accuracy requirements of multi-sensor tracking systems. In the proposed method, a second-order spatiogram is used to represent the target, and the similarity between the infrared candidate and its target model, as well as that between the visible candidate and its target model, is integrated into a novel objective function for evaluating the target state. A joint target center-shift formula is established by applying a derivation similar to that of the mean shift algorithm to the objective function. Finally, the optimal target location is obtained recursively by applying the mean shift procedure. In addition, an adaptive weight adjustment method and a particle-filter-based model update method are designed. We tested the proposed tracker on four publicly available data sets that involve common tracking difficulties, such as the absence of light at night; shading, clustering, and overlap among targets; and occlusion. We also compared our method with joint histogram tracking (JHT, the degenerated version of our method) and state-of-the-art algorithms, such as the L1 tracker (L1T) and the fuzzified region dynamic fusion tracker (FRD), on more than four infrared-visible image sequences. For the quantitative comparison, we use four evaluation metrics: the average center offset error, the average overlap ratio, the average success rate, and the average calculation time.
and the average calculation time. The corresponding test results of each algorithm in the four data sets are as follows: proposed method (6.664
0.702
0.921
0.009)
L1T track infrared target(25.53
0.583
0.742
0.363)
L1T track visible target(31.21
0.359
0.459
0.293)
FRD (10.73
0.567
0.702
0.565)
and JHT(15.07
0.622
0.821
0.001). In terms of overlap ratio
the average precision of our method is approximately 23%
14%
and 8% higher than those of L1T
FRD
and JHT
respectively. In terms of success ratio
the average value of our method is approximately 32%
46%
and 10% higher than the corresponding trackers. The proposed fusion tracker is superior to a single-source tracker in addressing cluttered background
light changes, and spatial information retention. It is suitable for tracking targets in challenging situations, such as the absence of light at night; shading, clustering, and overlap among targets; and occlusion. The method runs at a rate of 30 frames/s, thereby allowing simultaneous tracking of up to four targets in real time.
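The representation at the core of the method is the second-order spatiogram: a histogram augmented, for each bin, with the spatial mean and covariance of the pixels that fall into that bin. A minimal sketch for a grayscale patch follows; the bin count, quantization, and normalization are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def second_order_spatiogram(gray, n_bins=16):
    """Second-order spatiogram of a grayscale patch: per-bin probability,
    spatial mean, and spatial covariance of the contributing pixel
    coordinates. Illustrative sketch, not the paper's implementation."""
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    # Quantize intensities into n_bins equal-width bins.
    bins = np.clip((gray.ravel().astype(int) * n_bins) // 256, 0, n_bins - 1)

    counts = np.zeros(n_bins)
    means = np.zeros((n_bins, 2))
    covs = np.zeros((n_bins, 2, 2))
    for b in range(n_bins):
        pts = coords[bins == b]
        counts[b] = len(pts)
        if len(pts) > 0:
            means[b] = pts.mean(axis=0)
        if len(pts) > 1:
            covs[b] = np.cov(pts, rowvar=False)
    counts /= counts.sum()  # normalize counts to a probability distribution
    return counts, means, covs
```

In the joint tracker, one such spatiogram would be built per modality (infrared and visible), and the similarities of both candidates to their target models feed the shared objective that the mean-shift-style center-shift formula optimizes.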