Bimodal emotion recognition based on facial expression and body gesture

Yan Jingjie1, Zheng Wenming2, Xin Minghai2, Qiu Wei2 (1. School of Information Science and Engineering, Southeast University, Nanjing 210096, China; 2. Research Center for Learning Science, Southeast University, Nanjing 210096, China)

Abstract
Multimodal emotion recognition is an important topic in current affective computing research. This paper studies bimodal emotion recognition from facial expressions and body gestures, and proposes a bimodal emotion recognition method based on bilateral sparse partial least squares (BSPLS). First, spatio-temporal features of the facial expression and body gesture modalities are extracted from video image sequences as emotion feature vectors. Then, the BSPLS dimensionality reduction method is applied to further extract the emotion features of the two modalities, which are combined into a new emotion feature vector. Finally, two classifiers are used for emotion classification and recognition. Experiments are conducted on the internationally widely used FABO bimodal emotion database of facial expressions and body gestures, and the proposed method is compared with several subspace methods (principal component analysis, canonical correlation analysis, and partial least squares regression) to evaluate its recognition performance. The experimental results show that fusing the two modalities is more effective than using either single modality, and that the BSPLS algorithm achieves the highest emotion recognition rate among the compared methods.
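For reference, a generic bilateral sparse PLS component pair can be posed as a penalized cross-covariance maximization; the penalty weights $\lambda_u, \lambda_v$ and the unit-norm constraints below are assumptions of this sketch, and the paper's exact objective may differ:

$$
\max_{u,\,v}\; u^{\top} X^{\top} Y\, v \;-\; \lambda_u \lVert u \rVert_1 \;-\; \lambda_v \lVert v \rVert_1
\quad \text{s.t.}\quad \lVert u \rVert_2 \le 1,\ \lVert v \rVert_2 \le 1,
$$

where $X \in \mathbb{R}^{n \times p}$ and $Y \in \mathbb{R}^{n \times q}$ hold the facial-expression and body-gesture feature vectors of the same $n$ video samples, and the $\ell_1$ terms enforce sparsity on both projection directions (hence "bilateral"). The fused feature is then obtained by projecting each modality onto its own directions and concatenating the projections.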
Keywords
Bimodal emotion recognition based on body gesture and facial expression

Yan Jingjie1, Zheng Wenming2, Xin Minghai2, Qiu Wei2 (1. School of Information Science and Engineering, Southeast University, Nanjing 210096, China; 2. Research Center for Learning Science, Southeast University, Nanjing 210096, China)

Abstract
Multimodal emotion recognition has been a very important research topic in affective computing. This paper focuses on bimodal emotion recognition based on body gesture and facial expression and presents a new bimodal emotion recognition method based on bilateral sparse partial least squares (BSPLS). First, spatio-temporal features are extracted as emotion feature vectors from video sequences of body gestures and facial expressions, respectively. Then, the proposed BSPLS method is used to extract emotion features from the two modalities and fuse the facial expression and body gesture features into a new emotion feature vector. Finally, two classifiers are used for emotion classification. The BSPLS method is compared with several subspace methods, including PCA, CCA, and PLSR, on data from the FABO database. The experimental results show that the feature fusion methods all outperform monomodal emotion recognition, and that the proposed BSPLS feature fusion achieves the best recognition performance.
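The following is a minimal NumPy sketch of how a BSPLS-style feature fusion of this kind could be implemented. It is an illustration under stated assumptions, not the authors' implementation: the alternating soft-thresholding updates, the PLS-style deflation, the function names (soft_threshold, bspls_pair, bspls_fuse), and the penalty parameters lam_u/lam_v are all choices of this sketch.

```python
import numpy as np

def soft_threshold(a, delta):
    """Element-wise soft-thresholding; drives small entries to zero to induce sparsity."""
    return np.sign(a) * np.maximum(np.abs(a) - delta, 0.0)

def bspls_pair(X, Y, lam_u=0.1, lam_v=0.1, n_iter=200, tol=1e-6, seed=0):
    """One pair of sparse directions (u, v) that approximately maximise the
    covariance between X @ u and Y @ v, with L1 penalties on both sides
    (sparsity on both modalities, hence 'bilateral')."""
    rng = np.random.default_rng(seed)
    C = X.T @ Y                                   # cross-covariance of the two modalities
    u = rng.standard_normal(X.shape[1]); u /= np.linalg.norm(u)
    v = rng.standard_normal(Y.shape[1]); v /= np.linalg.norm(v)
    for _ in range(n_iter):
        u_new = soft_threshold(C @ v, lam_u)
        u_new /= np.linalg.norm(u_new) + 1e-12    # renormalise (guard against all-zero)
        v_new = soft_threshold(C.T @ u_new, lam_v)
        v_new /= np.linalg.norm(v_new) + 1e-12
        converged = (np.linalg.norm(u_new - u) < tol and np.linalg.norm(v_new - v) < tol)
        u, v = u_new, v_new
        if converged:
            break
    return u, v

def bspls_fuse(X_expr, X_gest, n_components=10, lam_u=0.1, lam_v=0.1):
    """Extract several sparse direction pairs with PLS-style deflation, project
    each modality onto its directions, and concatenate the projections into
    one fused emotion feature matrix (one row per video sample)."""
    X, Y = X_expr.copy(), X_gest.copy()
    U, V = [], []
    for _ in range(n_components):
        u, v = bspls_pair(X, Y, lam_u, lam_v)
        U.append(u); V.append(v)
        # Deflation: remove the variation explained by the extracted scores.
        t = X @ u
        X -= np.outer(t, X.T @ t) / (t @ t + 1e-12)
        s = Y @ v
        Y -= np.outer(s, Y.T @ s) / (s @ s + 1e-12)
    U, V = np.column_stack(U), np.column_stack(V)
    fused = np.hstack([X_expr @ U, X_gest @ V])   # bimodal fused feature
    return fused, U, V
```

In a downstream experiment, the fused matrix would be fed to an off-the-shelf classifier (for example a nearest-neighbour classifier or an SVM) to mirror the two-classifier comparison described in the abstract; the number of components and the penalty weights are hyperparameters that the abstract does not specify.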
Keywords
