Expression-invariant Face Recognition Based on Eigenmotion
YU Bing, JIN Lianfu, CHEN Ping
The difficulty of the face recognition problem lies in handling different types of variation, such as facial expression, illumination, and pose. To improve the robustness of face recognition with respect to facial expression, this paper proposes a new approach, the eigenmotion-based method, which is tolerant to large variations in facial expression. In this approach, motion vectors are first computed between a test face image and a neutral training image using the block matching method. These vectors are then projected onto a low-dimensional subspace that is pretrained by applying principal component analysis (PCA) to motion vectors obtained from training images with expression variations; this subspace is called the eigenmotion space. Finally, the identity of the test image is determined from its residue with respect to the eigenmotion space. Both an individual modeling method and a common modeling method are described in this paper. Experimental results show that the proposed eigenmotion-based method outperforms the eigenface approach in the presence of facial expression variations. The approach can also be extended to model other types of variation, such as illumination and pose.
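The pipeline summarized above (block-matching motion vectors, a PCA-trained eigenmotion subspace, and residue-based identification) can be sketched in Python. This is a minimal illustration under assumed choices: all function names, the SAD matching criterion, and the block/search-window sizes are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def block_motion_vectors(ref, img, block=8, search=4):
    """Block matching: for each block in the reference (neutral) image,
    find the best-matching displaced block in img within a +/-search
    window, using the sum of absolute differences (SAD) criterion.
    Returns the concatenated (dx, dy) displacements as one vector."""
    h, w = ref.shape
    vecs = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = ref[y:y + block, x:x + block]
            best, best_dxy = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = img[yy:yy + block, xx:xx + block]
                        sad = np.abs(patch - cand).sum()
                        if sad < best:
                            best, best_dxy = sad, (dx, dy)
            vecs.extend(best_dxy)
    return np.asarray(vecs, dtype=float)

def train_eigenmotion(motion_vectors, k):
    """PCA on training motion vectors: returns the mean vector and the
    top-k principal directions (the 'eigenmotions') as rows of a matrix."""
    X = np.asarray(motion_vectors, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def residue(v, mean, basis):
    """Distance from a motion vector v to the eigenmotion subspace
    (reconstruction error after projecting onto the basis)."""
    vc = v - mean
    proj = basis.T @ (basis @ vc)
    return np.linalg.norm(vc - proj)
```

Identification then amounts to computing the motion vector of the test image against each candidate's neutral image and choosing the candidate whose eigenmotion space yields the smallest residue.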