Occluded facial expression recognition based on the fusion of local features

Wang Xiaohua1, Li Ruijing1, Hu Min1, Ren Fuji1,2 (1. School of Computer and Information, Hefei University of Technology, Anhui Province Key Laboratory of Affective Computing and Advanced Intelligent Machine, Hefei 230009, China; 2. Graduate School of Advanced Technology and Science, University of Tokushima, Tokushima 7708502, Japan)

Abstract
Objective To address partial occlusion in facial expression recognition, this paper proposes an occluded facial expression recognition method that fuses local features. Method First, to reduce the effect of noise, a Gaussian filter is applied to the normalized images. Then, according to the different contributions of facial regions to expression recognition, each image is divided into two important sub-regions, and each sub-region is further partitioned into non-overlapping blocks. An improved center-symmetric local binary pattern (difference center-symmetric local binary pattern, DCS-LBP) and an improved difference local directional pattern (gradient center-symmetric local directional pattern, GCS-LDP) are used to extract features from each block, and the block histograms are cascaded to form the feature histogram of the image. Finally, a nearest neighbor classifier is used to recognize the expression images: the chi-square distance between the feature histograms of the test and training images is computed, and, considering the interference of occlusion and the different amounts of information contained in the blocks, information entropy is used to adaptively weight the per-block chi-square distances. Result Three cross-validation experiments were conducted on the Japanese Female Facial Expression (JAFFE) database and the Cohn-Kanade (CK) database. On the JAFFE database, average recognition rates of at least 92.86%, 94.76%, and 86.19% are achieved under random occlusion, mouth occlusion, and eye occlusion, respectively; on the CK database, the corresponding average recognition rates are at least 99%, 98.67%, and 99%. Conclusion The proposed feature extraction method describes an image by fusing the differences of gray values along the gradient direction and the differences of edge response values between gradient directions, thereby capturing image detail more completely. For occlusion, the image partitioning and entropy-based adaptive weighting adopted in this paper effectively reduce the interference of occlusion with expression recognition. Under the same experimental conditions, comparisons with classical local feature extraction methods and occlusion handling methods demonstrate the effectiveness and superiority of the proposed method.
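The abstract does not spell out the DCS-LBP and GCS-LDP operators in full. As a rough illustration of the block-based feature extraction it describes, the Python sketch below computes baseline center-symmetric LBP (CS-LBP) codes and cascades per-block histograms; the radius, threshold, block grid, and function names are illustrative assumptions, and the gradient-difference refinements that define DCS-LBP and GCS-LDP are not reproduced here.

```python
import numpy as np

def cs_lbp(image, radius=1, threshold=3.0):
    """Baseline center-symmetric LBP codes for a grayscale image.

    Each pixel is coded by comparing its 4 center-symmetric neighbor pairs,
    giving a 4-bit code (16 possible values). The paper's DCS-LBP further
    uses gray-value differences along the gradient direction; that
    refinement is not reproduced in this sketch.
    """
    img = np.asarray(image, dtype=np.float64)
    h, w = img.shape
    codes = np.zeros((h - 2 * radius, w - 2 * radius), dtype=np.int32)
    # Offsets (dy, dx) of the four center-symmetric neighbor pairs.
    pairs = [((-radius, 0), (radius, 0)),
             ((-radius, radius), (radius, -radius)),
             ((0, radius), (0, -radius)),
             ((radius, radius), (-radius, -radius))]
    for bit, ((dy1, dx1), (dy2, dx2)) in enumerate(pairs):
        p1 = img[radius + dy1:h - radius + dy1, radius + dx1:w - radius + dx1]
        p2 = img[radius + dy2:h - radius + dy2, radius + dx2:w - radius + dx2]
        codes += (((p1 - p2) > threshold) * (1 << bit)).astype(np.int32)
    return codes

def cascaded_block_histogram(region, blocks=(4, 4)):
    """Partition a face sub-region into non-overlapping blocks and cascade
    the per-block 16-bin CS-LBP histograms into one feature vector."""
    codes = cs_lbp(region)
    h, w = codes.shape
    bh, bw = h // blocks[0], w // blocks[1]
    hists = []
    for i in range(blocks[0]):
        for j in range(blocks[1]):
            block = codes[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(block, bins=16, range=(0, 16))
            hists.append(hist / max(hist.sum(), 1))  # per-block normalization
    return np.concatenate(hists)
```

In the method described above, such a histogram would be built for both the eye and the mouth sub-regions (using DCS-LBP and GCS-LDP rather than plain CS-LBP) and the two vectors cascaded into the final image descriptor.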
Keywords
Occluded facial expression recognition based on the fusion of local features

Wang Xiaohua1, Li Ruijing1, Hu Min1, Ren Fuji1,2 (1. School of Computer and Information, Hefei University of Technology, Anhui Province Key Laboratory of Affective Computing and Advanced Intelligent Machine, Hefei 230009, China; 2. Graduate School of Advanced Technology and Science, University of Tokushima, Tokushima 7708502, Japan)

Abstract
Objective To reduce the effect of partial occlusion in facial expression recognition, this paper proposes a facial expression recognition method based on the fusion of local features. Method First, the normalized images are processed with a Gaussian filter to reduce the effect of noise. According to the different contributions of facial regions to expression recognition, each image is then divided into two main parts: the eye region and the mouth region. To capture finer structural detail, these two parts are further divided into several non-overlapping blocks. Two patterns are used to extract the features of each sub-block: the difference center-symmetric local binary pattern (DCS-LBP), an improved version of the center-symmetric local binary pattern, and the gradient center-symmetric local directional pattern (GCS-LDP), an improved version of the difference local directional pattern. The features are encoded as two binary sequences, which are then cascaded to obtain the feature histogram of the sub-block, and the final histogram of the image is obtained by cascading the histograms of all sub-blocks. Finally, the nearest neighbor classifier is used for classification: the chi-square distance between the feature histograms of the test and training images is computed, and, considering the different amounts of information contained in the sub-blocks and to further reduce the effect of occlusion, information entropy is used to adaptively weight the per-block chi-square distances. Result Three cross-validation experiments are conducted on the JAFFE and CK databases. The average recognition rates under random occlusion, mouth occlusion, and eye occlusion are at least 92.86%, 94.76%, and 86.19% on the JAFFE database, and at least 99%, 98.67%, and 99% on the CK database. Conclusion For feature extraction, the proposed method describes an image from two aspects: the difference of gray values along the gradient direction and the difference of edge response values between gradient directions, so image detail is captured more completely. For occlusion, image partitioning and entropy-based adaptive weighting of the chi-square distance effectively reduce the interference of occlusion. Under the same experimental conditions, comparisons with classical local feature extraction and occlusion handling methods demonstrate the effectiveness and superiority of the proposed method.
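As a rough sketch of the classification step described above, the following Python code weights per-block chi-square distances by the information entropy of the test image's block histograms and selects the nearest training sample. The exact weighting rule (how entropy is computed and mapped to a weight) is an assumption here, and the function names and normalization are illustrative rather than the paper's definitive implementation.

```python
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def block_entropy(hist, eps=1e-10):
    """Shannon entropy of a block histogram, used here as a proxy for how
    informative the block is (a flat, e.g. occluded, block scores low)."""
    p = hist / max(hist.sum(), eps)
    return float(-np.sum(p * np.log2(p + eps)))

def nearest_neighbor(test_blocks, train_samples):
    """Entropy-weighted chi-square nearest-neighbor classification.

    test_blocks   : list of per-block histograms of the test image
    train_samples : list of (per-block histograms, expression label) pairs
    """
    # Adaptive weights from the test image's block entropies (assumption:
    # higher entropy, i.e. more information, receives a larger weight).
    weights = np.array([block_entropy(h) for h in test_blocks])
    weights = weights / max(weights.sum(), 1e-10)
    best_label, best_dist = None, np.inf
    for train_blocks, label in train_samples:
        dist = sum(w * chi_square(ht, hr)
                   for w, ht, hr in zip(weights, test_blocks, train_blocks))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```

Down-weighting low-entropy blocks is one plausible reading of the abstract's "adaptive weighting": a block covered by an occluder tends to produce a flat, uninformative code histogram, so its distance contributes less to the final decision.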
Keywords
