Classification of small spontaneous expression database based on deep transfer learning network

Fu Xiaofeng, Wu Jun, Niu Li (School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China)

Abstract
Objective Compared with traditional expressions, spontaneous expressions better reveal a person's true emotions and have great application potential in fields such as public security and medical care. Because spontaneous expressions are difficult to induce and their samples are hard to collect, available data are scarce. To discriminate the categories of spontaneous expressions, we combine neural-network learning methods, which are being widely applied in more and more scenarios, and propose a discrimination method based on deep transfer networks. Method To preserve the features of the original spontaneous-expression images, we apply no data augmentation even on the small data samples, and we use three-dimensional optical-flow feature images as comparison samples. The samples are fed into different transfer-network models for training; the trained networks of the same architecture are then combined into an isomorphic network whose output discriminates the category of a spontaneous expression. Result Experimental results show that the proposed method exhibits excellent classification performance on different databases. On the public spontaneous-expression databases CASME, CASME II, and CAS(ME)2, the average test accuracies reach 94.3%, 97.3%, and 97.2%, respectively, 7% higher than the best previously reported results. Conclusion We apply transfer learning to the discrimination of spontaneous-expression categories and compare different network models and different kinds of samples, achieving the best average accuracy of spontaneous-expression discrimination reported to date.
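As a rough illustration of how the optical-flow comparison samples might be assembled, the sketch below stacks the horizontal and vertical optical-flow components together with a grayscale frame into one 3-channel image. The channel order and per-channel normalization here are our assumptions; the paper does not specify them.

```python
import numpy as np

def flow_composite(gray, flow_u, flow_v):
    """Build a 3-channel sample from optical flow and a grayscale frame.

    gray   : 2-D grayscale frame
    flow_u : 2-D horizontal optical-flow component
    flow_v : 2-D vertical optical-flow component
    Returns an (H, W, 3) uint8 image; channel order is an assumption.
    """
    def norm8(x):
        # Scale each channel independently to the 0..255 uint8 range.
        x = np.asarray(x, dtype=np.float64)
        rng = x.max() - x.min()
        if rng == 0:
            return np.zeros(x.shape, dtype=np.uint8)
        return ((x - x.min()) / rng * 255).astype(np.uint8)

    return np.stack([norm8(flow_u), norm8(flow_v), norm8(gray)], axis=-1)
```

The flow fields themselves would come from any dense optical-flow estimator applied to consecutive frames of the expression sequence.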
Keywords
Classification of small spontaneous expression database based on deep transfer learning network

Fu Xiaofeng, Wu Jun, Niu Li(School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China)

Abstract
Objective Expression is important in human-computer interaction. As a special expression, spontaneous expression features shorter duration and weaker intensity in comparison with traditional expressions. Spontaneous expressions can reveal a person's true emotions and present immense potential in detection, anti-detection, and medical diagnosis. Therefore, identifying the categories of spontaneous expression can make human-computer interaction smooth and fundamentally change the relationship between people and computers. Given that spontaneous expressions are difficult to be induced and collected, the scale of a spontaneous expression dataset is relatively small for training a new deep neural network. Only ten thousand spontaneous samples are present in each database. The convolutional neural network shows excellent performance and is thus widely used in a large number of scenes. For instance, the approach is better than the traditional feature extraction method in the aspect of improving the accuracy of discriminating the categories of spontaneous expression. Method This study proposes a method on the basis of different deep transfer network models for discriminating the categories of spontaneous expression. To preserve the characteristics of the original spontaneous expression, we do not use the technique of data enhancement to reduce the risk of convergence. At the same time, training samples, which comprise three-dimensional images that are composed of optical flow and grayscale images, are compared with the original RGB images. The three-dimensional image contains spatial information and temporal displacement information. In this study, we compare three network models with different samples. The first model is based on Alexnet that only changes the number of output layer neurons that is equal to the number of categories of spontaneous expression. 
Then, the network is fine-tuned to obtain the best training and testing results by freezing the parameters of different layers over several trials. The second model is based on Inception-v3. Two fully connected layers, with 512 neurons and as many neurons as spontaneous-expression categories, respectively, are appended to the output, so only the parameters of these two layers need to be fine-tuned. Replacing 7×7 convolution kernels with 3×3 convolution kernels increases network depth while reducing the number of parameters. The third model is based on Inception-ResNet-v2; as in the first model, only the number of output-layer neurons is changed. Finally, an isomorphic network model is proposed to identify the categories of spontaneous expressions. The model is composed of two transfer-learning networks of the same type trained on different samples, and it takes the maximum of their outputs as the final result. The isomorphic network makes decisions with high accuracy because, when both member networks produce the same output, that output is very likely correct. From the perspective of probability, we take the maximum of the different outputs as the prediction. Result Experimental results indicate that the proposed method exhibits excellent classification performance on different samples. The single-network outputs clearly show that features extracted from RGB images are as effective as those extracted from the three-dimensional optical-flow images. This result indicates that the spatiotemporal features obtained by the optical-flow method can be replaced by features extracted by the deep neural network. It also shows that, to a certain degree, features extracted by the network can compensate for missing information, such as the temporal information absent from RGB images or the color information absent from OF+ images.
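The isomorphic fusion rule described above can be sketched in plain NumPy as follows. Taking the elementwise maximum of the two networks' class probabilities and predicting the argmax is our reading of "takes the maximum as the final output"; the exact fusion details in the paper may differ.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax over class logits.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def isomorphic_predict(logits_a, logits_b):
    """Fuse two same-architecture networks trained on different sample
    types (e.g. RGB vs. optical-flow composites): take the elementwise
    maximum of their class probabilities, then the argmax."""
    p = np.maximum(softmax(logits_a), softmax(logits_b))
    return p.argmax(axis=1)

# Example: network A favors class 0 weakly, network B favors class 1
# strongly; the fused prediction follows the more confident network.
preds = isomorphic_predict(np.array([[2.0, 0.5, 0.1]]),
                           np.array([[0.1, 3.0, 0.2]]))
```

When both networks agree on the top class, the fused prediction trivially matches them, which is the high-confidence agreement case the abstract appeals to.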
The high average accuracy of each single network indicates good testing performance on every dataset. Networks with high complexity perform well because the spontaneous-expression samples can train the deep transfer-learning networks effectively. The proposed models achieve state-of-the-art performance, with an average accuracy of over 96%. Analysis of the isomorphic network model shows that its performance is not always better than that of a single network: a single network already discriminates spontaneous-expression categories with high confidence, so the isomorphic network cannot easily improve the average accuracy. The Titan Xp GPU used for this research was donated by the NVIDIA Corporation. Conclusion Compared with traditional expressions, spontaneous expressions change subtly, which makes feature extraction difficult. In this study, different transfer-learning networks are applied to discriminate the categories of spontaneous expressions, and the testing accuracies of networks trained on different kinds of samples are compared. Experimental results show that, in contrast to traditional methods, deep learning has clear advantages in spontaneous-expression feature extraction. The findings also show that deep networks can extract comprehensive features from spontaneous expressions and are robust across databases, as indicated by their good testing results. In the future, we will extract spontaneous expressions directly from videos and identify their categories with high accuracy by removing distracting occurrences such as blinking.
Keywords
