A Computational Framework for the Temporal-spatial Alignment of Multi-sensor Image Sequences[J]. Journal of Image and Graphics, 2005, 10(4): 441. DOI: 10.11834/jig.20050490.
This paper presents a computational framework for the temporal-spatial alignment of multi-sensor image sequences. The framework is suited to scenarios where the cameras are static, the captured sequences contain moving objects, and the initial segments of the sequences show only the static background. The framework first registers the static backgrounds of the sequences to obtain an initial spatial transformation, then uses the correspondence between the centroids of the moving objects to estimate an initial temporal transformation. Finally, mutual information is incorporated into the framework to compute the final temporal-spatial transformations. The framework achieves sub-pixel and sub-frame registration accuracy and has been successfully applied to a visible/infrared sequence alignment experiment.
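The two initialization steps described in the abstract can be illustrated with a minimal sketch. The function names and parameters below are illustrative, not the paper's implementation: `mutual_information` computes the histogram-based mutual information that is commonly used as a similarity measure for visible/infrared registration, and `temporal_offset_from_centroids` estimates an integer frame offset by matching centroid trajectories of a moving object across two sequences (the paper's method additionally refines both to sub-pixel and sub-frame accuracy).

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized grayscale images,
    estimated from their joint intensity histogram."""
    hist_2d, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()          # joint distribution
    px = pxy.sum(axis=1)                   # marginal of a
    py = pxy.sum(axis=0)                   # marginal of b
    px_py = px[:, None] * py[None, :]      # product of marginals
    nz = pxy > 0                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / px_py[nz])))

def temporal_offset_from_centroids(c1, c2, max_lag=10):
    """Estimate an integer frame offset between two centroid
    trajectories (arrays of shape (T, 2)) by minimising the mean
    squared distance over candidate lags."""
    best_lag, best_err = 0, np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = c1[lag:], c2[:len(c2) - lag]
        else:
            a, b = c1[:len(c1) + lag], c2[-lag:]
        n = min(len(a), len(b))
        if n == 0:
            continue
        err = np.mean(np.sum((a[:n] - b[:n]) ** 2, axis=1))
        if err < best_err:
            best_lag, best_err = lag, err
    return best_lag
```

In a full pipeline of the kind the abstract outlines, the spatial transformation from background registration and the temporal offset from the centroids would serve as the starting point for a joint optimisation that maximises mutual information over both transformations.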