Gait recognition algorithm based on a dual-branch feature fusion network

Xu Shuo1, Zheng Feng2, Tang Jun1, Bao Wenxia1 (1. School of Electronics and Information Engineering, Anhui University, Hefei 230601, China; 2. College of Engineering, Southern University of Science and Technology, Shenzhen 518055, China)

Abstract
Objective Among gait recognition algorithms, appearance-based methods achieve high accuracy and are easy to implement, but they are sensitive to appearance changes; model-based methods are more robust to appearance changes, but modeling is difficult and their accuracy is lower. To obtain high accuracy while remaining robust to appearance changes, we propose a dual-branch network that fuses appearance features and pose features, combining the advantages of both approaches. Method The dual-branch network contains an appearance branch and a pose branch. The appearance branch uses the GaitSet network to extract appearance features from silhouette images; the pose branch uses a five-layer convolutional network to extract pose features from pose skeletons. On this basis, a feature fusion module is built to fuse the appearance features and pose features, and a channel attention mechanism is introduced to enable the fusion of features of arbitrary sizes; the module is designed so that noise in the features is suppressed during fusion. Finally, the fused gait features are used to identify pedestrians. Result Experiments on the CASIA-B (Institute of Automation, Chinese Academy of Sciences, Gait Dataset B) dataset compare the proposed algorithm with current mainstream gait recognition algorithms under two experimental settings, cross-view and different walking conditions, using Rank-1 accuracy as the evaluation metric. Under the MT (medium-sample training) partition of the cross-view setting, the algorithm achieves accuracies of 93.4%, 84.8%, and 70.9% under the three walking conditions, improving on the second-best algorithm by 1.4%, 0.5%, and 8.4%, respectively. Under the different-walking-condition setting, it achieves 94.9% and 90.0% under the two walking conditions, the best performance. Conclusion In scenarios where both appearance data and pose data are available, the proposed algorithm effectively fuses appearance information and pose information, obtaining richer gait features while reducing the influence of appearance changes on them, and thereby improves gait recognition performance.
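The dual-branch design described above can be sketched in PyTorch. This is a minimal structural sketch, not the paper's implementation: the real appearance branch is GaitSet and the pose branch a five-layer CNN, while the small stand-in convolution stacks, the channel sizes, and the 17-channel COCO-style key-point heatmap input below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DualBranchGaitNet(nn.Module):
    """Structural sketch: appearance branch + pose branch + channel-attention fusion."""
    def __init__(self, ch=32):
        super().__init__()
        # stand-in for GaitSet: silhouettes (B, 1, H, W) -> (B, ch, H, W)
        self.appearance = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        # stand-in for the five-layer pose CNN: key-point heatmaps -> (B, ch, H, W)
        self.pose = nn.Sequential(
            nn.Conv2d(17, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        # channel attention over the concatenated, globally pooled features
        self.attn = nn.Sequential(
            nn.Linear(2 * ch, ch), nn.ReLU(),
            nn.Linear(ch, 2 * ch), nn.Sigmoid())

    def forward(self, sil, kp):
        fa, fp = self.appearance(sil), self.pose(kp)
        f = torch.cat([fa, fp], dim=1)        # (B, 2*ch, H, W)
        w = self.attn(f.mean(dim=(2, 3)))     # learned per-channel weights
        f = f * w[:, :, None, None]           # down-weight noisy channels
        return f.mean(dim=(2, 3))             # pooled gait embedding
```

Identification then amounts to comparing the embedding of a probe sequence against gallery embeddings by feature distance.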
Keywords
Dual branch feature fusion network based gait recognition algorithm

Xu Shuo1, Zheng Feng2, Tang Jun1, Bao Wenxia1(1.School of Electronics and Information Engineering, Anhui University, Hefei 230601, China;2.College of Engineering, Southern University of Science and Technology, Shenzhen 518055, China)

Abstract
Objective Gait is the pattern of human walking and one of the key biometric features for person identification. As a non-contact, long-distance means of capturing identity information, gait recognition has been applied in video surveillance and public security. Gait recognition algorithms fall into two mainstreams: appearance-based methods and model-based methods. Appearance-based methods typically extract gait features from a sequence of silhouette images; however, they are easily affected by appearance changes such as non-rigid clothing deformation and background clutter. In contrast, model-based methods commonly leverage body structure or motion priors to model gait patterns and are more robust to appearance variations. In practice, however, it is difficult to define a universal model for gait description, and pre-defined models are constrained to certain scenarios. Recent model-based methods exploit deep learning-based pose estimation to model the key points of the human body, but the estimated poses contain redundant noise introduced by pose estimators and occlusion. In summary, appearance-based methods rely on visual feature description, while model-based methods describe motion and structure at a semantic level. We aim to design a novel gait recognition approach that goes beyond the two existing paradigms and improves recognition ability by combining appearance features and pose features. Method We design a dual-branch network for gait recognition. The input data are fed into the two branches to extract appearance features and pose features separately, and the two kinds of features are then merged into the final gait features by a feature fusion module.
Specifically, we adopt the state-of-the-art GaitSet network as the appearance branch to extract appearance features from silhouette images, and design a two-stream convolutional neural network (CNN) that extracts pose features from pose key points based on position information and motion information. Meanwhile, a squeeze-and-excitation feature fusion module (SEFM) is designed to merge the two kinds of features by learning their weights. In the squeeze step, the appearance feature maps and pose feature maps are integrated via pooling, concatenation, and projection. In the excitation step, the weighted feature maps of appearance and pose are obtained via projection and the Hadamard product. The two kinds of feature maps are down-sampled and concatenated into the final gait feature with adaptive weighting. To further investigate the roles of appearance features and pose features, we design two variants of SEFM, namely SEFM-A and SEFM-P. SEFM merges appearance features and pose features mutually; SEFM-A merges pose features into appearance features while leaving the pose features unchanged; SEFM-P merges appearance features into pose features while leaving the appearance features unchanged. Our algorithm is implemented in PyTorch, and the evaluation is carried out on CASIA-B (Institute of Automation, Chinese Academy of Sciences, Gait Dataset B). We adopt the AlphaPose algorithm to extract pose key points from the original RGB videos, and use the provided silhouette images. In each training iteration, we randomly select 16 subjects and 8 random samples per subject, each sample containing a sub-sequence of 30 frames; each batch therefore contains 3 840 image-skeleton pairs. We adopt the Adam optimizer and train the network for 60 000 iterations.
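The squeeze and excitation steps, and the difference between SEFM and its SEFM-A/SEFM-P variants, can be sketched as follows. This is a hedged sketch based only on the abstract's description: the hidden width, the use of a single shared joint projection, and the exact gating layout are assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn

class SEFM(nn.Module):
    """Sketch of the squeeze-and-excitation feature fusion module.
    mode='both' -> SEFM (mutual fusion); mode='a' -> SEFM-A (only appearance
    features updated); mode='p' -> SEFM-P (only pose features updated)."""
    def __init__(self, ca, cp, hidden=64, mode="both"):
        super().__init__()
        self.mode = mode
        self.squeeze = nn.Linear(ca + cp, hidden)   # joint projection after pooling
        self.excite_a = nn.Linear(hidden, ca)       # per-stream excitation projections
        self.excite_p = nn.Linear(hidden, cp)

    def forward(self, fa, fp):
        # squeeze: globally average-pool each map, concatenate, project
        pooled = torch.cat([fa.mean(dim=(2, 3)), fp.mean(dim=(2, 3))], dim=1)
        z = torch.relu(self.squeeze(pooled))
        # excitation: project back per stream, gate via Hadamard product
        if self.mode in ("both", "a"):
            fa = fa * torch.sigmoid(self.excite_a(z))[:, :, None, None]
        if self.mode in ("both", "p"):
            fp = fp * torch.sigmoid(self.excite_p(z))[:, :, None, None]
        return fa, fp   # downstream: down-sample and concatenate into the gait feature
```

With `mode="p"` the appearance stream passes through untouched, matching the SEFM-P behavior that performed best in the ablation.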
The initial learning rate is set to 0.000 2 for the pose branch and 0.000 1 for the appearance branch and the SEFM; the learning rate is divided by 10 at the 45 000th iteration. Result We first verify the effectiveness of the dual-branch network and the feature fusion modules. The results show that the dual-branch network improves performance and that there is a clear complementary effect between appearance features and pose features. The Rank-1 accuracies of the five feature fusion modules, SEFM, SEFM-A, SEFM-P, concatenation, and the multi-modal transfer module (MMTM), are 83.5%, 81.9%, 93.4%, 92.6%, and 79.5%, respectively. These results suggest that appearance features are more discriminative because the pose features contain noise, and that SEFM-P is able to merge the two kinds of features while suppressing this noise during fusion. We then compare our method with state-of-the-art gait recognition methods, including CNNs, event-based gait recognition (EV-Gait), GaitSet, and PoseGait. Experiments are conducted under two protocols, and Rank-1 accuracy is evaluated in three walking scenarios: normal walking, bag-carrying, and coat-wearing. Our method achieves the best performance under all experimental protocols. The Rank-1 accuracies for the three scenarios reach 93.4%, 84.8%, and 70.9% under protocol 1, and 95.7%, 87.8%, and 77.0% under protocol 2. Compared with the second-best method, GaitSet, the Rank-1 accuracy in the coat-wearing scenario is improved by 8.4% and 6.6% under the two protocols. Conclusion We present a novel gait recognition network based on the fusion of appearance features and pose features. The results demonstrate that our method exploits both kinds of features and is more robust to appearance variations, especially in the clothing-change scenario.
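The optimization schedule stated in the abstract (Adam, per-branch learning rates, a tenfold learning-rate drop at iteration 45 000) maps directly onto PyTorch parameter groups and a `MultiStepLR` scheduler. The three `Linear` modules below are placeholders for the real branches; everything else follows the abstract's numbers.

```python
import torch

# Placeholders standing in for the pose branch, appearance branch, and SEFM.
pose_branch = torch.nn.Linear(8, 8)
appearance_branch = torch.nn.Linear(8, 8)
sefm = torch.nn.Linear(8, 8)

# Per-branch initial learning rates: 2e-4 (pose), 1e-4 (appearance and SEFM).
optimizer = torch.optim.Adam([
    {"params": pose_branch.parameters(), "lr": 2e-4},
    {"params": appearance_branch.parameters(), "lr": 1e-4},
    {"params": sefm.parameters(), "lr": 1e-4},
])

# Divide every group's learning rate by 10 (gamma=0.1) at iteration 45 000;
# training runs for 60 000 iterations in total.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[45_000], gamma=0.1)
```

In the training loop, `scheduler.step()` is called once per iteration after `optimizer.step()`.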
Keywords
