跨视角步态识别综述
Cross-view gait recognition: a review
2023年，第28卷，第5期，页码：1265-1286
纸质出版日期: 2023-05-16
DOI: 10.11834/jig.220458
许文正, 黄天欢, 贲晛烨, 曾翌, 张军平. 2023. 跨视角步态识别综述. 中国图象图形学报, 28(05):1265-1286
Xu Wenzheng, Huang Tianhuan, Ben Xianye, Zeng Yi, Zhang Junping. 2023. Cross-view gait recognition: a review. Journal of Image and Graphics, 28(05):1265-1286
步态识别具有对图像分辨率要求低、可远距离识别、无需受试者合作、难以隐藏或伪装等优势,在安防监控和调查取证等领域有着广阔的应用前景。然而在实际应用中,步态识别的性能常受到视角、着装、携物和遮挡等协变量的影响,其中视角变化最为普遍,并且会使行人的外观发生显著改变。因此,提高步态识别对视角的鲁棒性一直是该领域的研究热点。为了全面认识现有的跨视角步态识别方法,本文对相关研究工作进行了梳理和综述。首先,从基本概念、数据采集方式和发展历程等角度简要介绍了该领域的研究背景,在此基础上,整理并分析了基于视频的主流跨视角步态数据库;然后,从基于3维步态信息的识别方法、基于视角转换模型的识别方法、基于视角不变特征的识别方法和基于深度学习的识别方法4个方面详细介绍了跨视角步态识别方法。最后,在CASIA-B(CASIA gait database, dataset B)、OU-ISIR LP(OU-ISIR gait database, large population dataset)和OU-MVLP(OU-ISIR gait database, multi-view large population dataset)3个数据库上对该领域代表性方法的性能进行了对比分析,并指出跨视角步态识别的未来研究方向。
Gait is closely tied to a pedestrian’s identity. Compared with recognition based on face, fingerprint, iris and other biometrics, gait recognition works at a distance, requires neither special acquisition equipment nor high image resolution, and needs no explicit cooperation from the subject. Moreover, gait is difficult to hide or disguise. Gait recognition therefore has a wide range of applications in public surveillance, forensic evidence collection, and daily attendance. In these practical applications, however, its performance is easily affected by covariates such as viewpoint variations, occlusions, and segmentation errors, among which viewpoint variation is one of the main factors degrading recognition performance: the intra-class differences across viewpoints are often greater than the inter-class differences under the same viewpoint. Improving the robustness of cross-view gait recognition has therefore become a research hotspot. This paper critically reviews existing cross-view gait recognition methods. First, the research background is introduced in terms of basic concepts, data acquisition methods, application scenarios, and the development of the field. We then focus on video-based cross-view gait recognition. Mainstream cross-view gait databases are analyzed in detail with respect to 1) data type, 2) sample size, 3) number of viewpoints, 4) acquisition environment, 5) other covariates, and 6) the characteristics of each database. Cross-view gait recognition methods are then presented in detail. Unlike most existing reviews, which organize gait recognition methods by basic processing steps such as data acquisition, feature representation, and classification, we focus on the cross-view recognition problem itself. Specifically, four categories of methods are analyzed on the basis of feature representation and classification: methods based on 3D gait information, view transformation model (VTM) based methods, view-invariant feature extraction methods, and deep learning based methods. Methods based on 3D gait information extract gait information from multi-view gait videos and use it to construct 3D gait models; they are robust to large view changes, but they often require complex configurations, expensive high-resolution multi-camera systems, and frame synchronization, which limits their application in real surveillance scenarios. For VTM based methods, singular value decomposition (SVD) based and regression based view transformation models applied to local and global features are introduced. Although a VTM can minimize the error between the transformed and the original gait features, discriminative analysis tends to be ignored. View-invariant feature extraction methods are compared in three groups: 1) handcrafted feature extraction, 2) discriminative subspace learning, and 3) metric learning; among the discriminative subspace learning methods, those based on canonical correlation analysis (CCA) are highlighted. Despite the advantages of these methods, learning a view-invariant subspace or metric that remains robust across views is still challenging. Deep learning based methods for cross-view recognition are mainly built on convolutional neural networks (CNNs), recurrent neural networks (RNNs), auto-encoders (AEs), generative adversarial networks (GANs), 3D convolutional neural networks (3D CNNs), and graph convolutional networks (GCNs).
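As an aside for readers unfamiliar with the VTM idea summarized above, the SVD-based variant (in the spirit of the frequency-domain VTM of Makihara et al. 2006, cited below) can be sketched as follows; the notation is illustrative and not taken verbatim from the reviewed papers. Gait features \(g_k^m\) (e.g., vectorized gait energy images) of \(M\) training subjects observed from \(K\) views are stacked and factorized:

\[
G =
\begin{bmatrix}
g_1^1 & \cdots & g_1^M \\
\vdots & \ddots & \vdots \\
g_K^1 & \cdots & g_K^M
\end{bmatrix}
= U S V^{\mathsf{T}}
= \begin{bmatrix} P_1 \\ \vdots \\ P_K \end{bmatrix}
\begin{bmatrix} v^1 & \cdots & v^M \end{bmatrix},
\]

where \(P_k\) is the view-specific block of \(US\) associated with view \(k\) and \(v^m\) is a view-independent representation of subject \(m\). A probe feature \(g_j\) observed under view \(j\) is then mapped to the gallery view \(k\) by \(\hat{g}_k = P_k P_j^{+} g_j\), with \(P_j^{+}\) the Moore-Penrose pseudo-inverse, so that matching is carried out under a common view. As noted above, this transformation minimizes reconstruction error but does not by itself enforce discriminability.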
To assess the potential of these cross-view gait recognition methods, representative state-of-the-art methods are further compared and analyzed on the CASIA-B (CASIA gait database, dataset B), OU-ISIR LP (OU-ISIR gait database, large population dataset) and OU-MVLP (OU-ISIR gait database, multi-view large population dataset) databases. Methods that represent gait with a sequence of silhouettes and employ 3D CNNs or combinations of multiple neural network architectures achieve good performance. In addition, deep neural network methods based on body-model representations also perform very well when view variation is the only covariate. Finally, future research directions for cross-view gait recognition are outlined, including 1) building large-scale gait databases with complex covariates, 2) cross-database gait recognition, 3) self-supervised learning of gait features, 4) disentangled representation learning of gait features, 5) further developing model-based gait representations, 6) exploring new methods for temporal feature extraction, 7) multimodal fusion for gait recognition, and 8) improving the security of gait recognition systems.
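For reference, the cross-view comparisons summarized above follow a common gallery/probe protocol: sequences from one view form the gallery, sequences from another view form the probe set, and rank-1 accuracy is the fraction of probes whose nearest gallery embedding shares their identity. The following Python sketch illustrates only this evaluation step; the feature arrays, their dimensionality, and the random data are hypothetical placeholders, not the code of any cited method.

import numpy as np

def rank1_accuracy(gallery_feats, gallery_ids, probe_feats, probe_ids):
    """Rank-1 identification rate for one (probe view, gallery view) pair.

    gallery_feats: (Ng, D) embeddings extracted under the gallery view.
    probe_feats:   (Np, D) embeddings extracted under the probe view.
    """
    # Euclidean distance between every probe and every gallery sample.
    dists = np.linalg.norm(probe_feats[:, None, :] - gallery_feats[None, :, :], axis=-1)
    nearest = np.argmin(dists, axis=1)  # index of the closest gallery sample per probe
    return float(np.mean(gallery_ids[nearest] == probe_ids))

# Hypothetical usage: 100 subjects, 256-dimensional embeddings, one view pair.
rng = np.random.default_rng(0)
gallery, probe = rng.normal(size=(100, 256)), rng.normal(size=(100, 256))
ids = np.arange(100)
print(f"rank-1 = {rank1_accuracy(gallery, ids, probe, ids):.3f}")

Averaging this rate over all probe/gallery view pairs (commonly excluding identical views) gives the mean cross-view rank-1 accuracy reported in such comparisons.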
关键词：计算机视觉；生物特征识别；步态识别；跨视角；机器学习；深度学习；神经网络
Keywords: computer vision; biometric recognition; gait recognition; cross-view; machine learning; deep learning; neural network
An W Z, Yu S Q, Makihara Y, Wu X H, Xu C, Yu Y, Liao R J and Yagi Y. 2020. Performance evaluation of model-based gait on multi-view very large population database with pose sequences. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2(4): 421-430 [DOI: 10.1109/TBIOM.2020.3008862]
Bashir K, Xiang T and Gong S G. 2010. Cross-view gait recognition using correlation strength//Proceedings of 2010 British Machine Vision Conference. Aberystwyth, UK: BMVA: 1-11 [DOI: 10.5244/C.24.109]
Ben X Y, Gong C, Zhang P, Jia X T, Wu Q and Meng W X. 2019a. Coupled patch alignment for matching cross-view gaits. IEEE Transactions on Image Processing, 28(6): 3142-3157 [DOI: 10.1109/TIP.2019.2894362]
Ben X Y, Gong C, Zhang P, Yan R, Wu Q and Meng W X. 2020. Coupled bilinear discriminant projection for cross-view gait recognition. IEEE Transactions on Circuits and Systems for Video Technology, 30(3): 734-747 [DOI: 10.1109/TCSVT.2019.2893736]
Ben X Y, Xu S and Wang K J. 2012. Review on pedestrian gait feature expression and recognition. Pattern Recognition and Artificial Intelligence, 25(1): 71-81
贲晛烨, 徐森, 王科俊. 2012. 行人步态的特征表达及识别综述. 模式识别与人工智能, 25(1): 71-81 [DOI: 10.16451/j.cnki.issn1003-6059.2012.01.010]
Ben X Y, Zhang P, Lai Z H, Yan R, Zhai X L and Meng W X. 2019b. A general tensor representation framework for cross-view gait recognition. Pattern Recognition, 90: 87-98 [DOI: 10.1016/j.patcog.2019.01.017]
Bobick A F and Johnson A Y. 2001. Gait recognition using static, activity-specific parameters//Proceedings of 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Kauai, USA: IEEE: 423-430 [DOI: 10.1109/CVPR.2001.990506]
Chao H Q, Wang K, He Y W, Zhang J P and Feng J F. 2022. GaitSet: cross-view gait recognition through utilizing gait as a deep set. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(7): 3467-3478 [DOI: 10.1109/TPAMI.2021.3057879]
Chen X, Luo X Z, Weng J, Luo W Q, Li H T and Tian Q. 2021. Multi-view gait image generation for cross-view gait recognition. IEEE Transactions on Image Processing, 30: 3041-3055 [DOI: 10.1109/TIP.2021.3055936]
Cheng M H, Ho M F and Huang C L. 2008. Gait analysis for human identification through manifold learning and HMM. Pattern Recognition, 41(8): 2541-2553 [DOI: 10.1016/j.patcog.2007.11.021]
Connor P and Ross A. 2018. Biometric recognition by gait: a survey of modalities and features. Computer Vision and Image Understanding, 167: 1-27 [DOI: 10.1016/j.cviu.2018.01.007]
Fan C, Peng Y J, Cao C S, Liu X, Hou S H, Chi J N, Huang Y Z, Li Q and He Z Q. 2020. GaitPart: temporal part-based model for gait recognition//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 14213-14221 [DOI: 10.1109/CVPR42600.2020.01423]
Goffredo M, Bouchrika I, Carter J N and Nixon M S. 2010. Self-calibrating view-invariant gait biometrics. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 40(4): 997-1008 [DOI: 10.1109/TSMCB.2009.2031091]
Grauman K, Shakhnarovich G and Darrell T. 2003. A Bayesian approach to image-based visual hull reconstruction//Proceedings of 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Madison, USA: IEEE: 187-194 [DOI: 10.1109/CVPR.2003.1211353]
Gu J X, Ding X Q, Wang S J and Wu Y S. 2010. Action and gait recognition from recovered 3-D human joints. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 40(4): 1021-1033 [DOI: 10.1109/TSMCB.2010.2043526]
Han F, Li X J, Zhao J and Shen F R. 2022. A unified perspective of classification-based loss and distance-based loss for cross-view gait recognition. Pattern Recognition, 125: #108519 [DOI: 10.1016/j.patcog.2021.108519]
Han J and Bhanu B. 2006. Individual recognition using gait energy image. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(2): 316-322 [DOI: 10.1109/TPAMI.2006.38]
He Y W and Zhang J P. 2018. Deep learning for gait recognition: a survey. Pattern Recognition and Artificial Intelligence, 31(5): 442-452
何逸炜, 张军平. 2018. 步态识别的深度学习: 综述. 模式识别与人工智能, 31(5): 442-452 [DOI: 10.16451/j.cnki.issn1003-6059.201805006]
He Y W, Zhang J P, Shan H M and Wang L. 2019. Multi-task GANs for view-specific feature learning in gait recognition. IEEE Transactions on Information Forensics and Security, 14(1): 102-113 [DOI: 10.1109/TIFS.2018.2844819]
Hossain A, Makihara Y, Wang J Q and Yagi Y. 2010. Clothing-invariant gait identification using part-based clothing categorization and adaptive weight control. Pattern Recognition, 43(6): 2281-2291 [DOI: 10.1016/j.patcog.2009.12.020]
Hou S H, Cao C S, Liu X and Huang Y Z. 2020. Gait lateral network: learning discriminative and compact representations for gait recognition//Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer: 382-398 [DOI: 10.1007/978-3-030-58545-7_22]
Hou S H, Liu X, Cao C S and Huang Y Z. 2022. Gait quality aware network: toward the interpretability of silhouette-based gait recognition. IEEE Transactions on Neural Networks and Learning Systems [DOI: 10.1109/TNNLS.2022.3154723]
Hu H F. 2013. Enhanced Gabor feature based classification using a regularized locally tensor discriminant model for multiview gait recognition. IEEE Transactions on Circuits and Systems for Video Technology, 23(7): 1274-1286 [DOI: 10.1109/TCSVT.2013.2242640]
Hu H F. 2014. Multiview gait recognition based on patch distribution features and uncorrelated multilinear sparse local discriminant canonical correlation analysis. IEEE Transactions on Circuits and Systems for Video Technology, 24(4): 617-630 [DOI: 10.1109/TCSVT.2013.2280098]
Hu M D, Wang Y H, Zhang Z X, Little J J and Huang D. 2013. View-invariant discriminative projection for multi-view gait-based human identification. IEEE Transactions on Information Forensics and Security, 8(12): 2034-2045 [DOI: 10.1109/TIFS.2013.2287605]
Huang T H, Ben X Y, Gong C, Zhang B C, Yan R and Wu Q. 2022. Enhanced spatial-temporal salience for cross-view gait recognition. IEEE Transactions on Circuits and Systems for Video Technology, 32(10): 6967-6980 [DOI: 10.1109/TCSVT.2022.3175959]
Huang X H, Zhu D W, Wang H, Wang X G, Yang B, He B T, Liu W Y and Feng B. 2021a. Context-sensitive temporal feature learning for gait recognition//Proceedings of 2021 IEEE/CVF International Conference on Computer Vision. Montreal, Canada: IEEE: 12889-12898 [DOI: 10.1109/ICCV48922.2021.01267]
Huang Z, Xue D X, Shen X, Tian X M, Li H Q, Huang J Q and Hua X S. 2021b. 3D local convolutional neural networks for gait recognition//Proceedings of 2021 IEEE/CVF International Conference on Computer Vision. Montreal, Canada: IEEE: 14900-14909 [DOI: 10.1109/ICCV48922.2021.01465]
Iwama H, Okumura M, Makihara Y and Yagi Y. 2012. The OU-ISIR gait database comprising the large population dataset and performance evaluation of gait recognition. IEEE Transactions on Information Forensics and Security, 7(5): 1511-1521 [DOI: 10.1109/TIFS.2012.2204253]
Jean F, Albu A B and Bergevin R. 2009. Towards view-invariant gait modeling: computing view-normalized body part trajectories. Pattern Recognition, 42(11): 2936-2949 [DOI: 10.1016/j.patcog.2009.05.006]
Kusakunniran W. 2014. Recognizing gaits on spatio-temporal feature domain. IEEE Transactions on Information Forensics and Security, 9(9): 1416-1423 [DOI: 10.1109/TIFS.2014.2336379]
Kusakunniran W, Wu Q, Zhang J and Li H D. 2010. Support vector regression for multi-view gait recognition based on local motion feature selection//Proceedings of 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Francisco, USA: IEEE: 974-981 [DOI: 10.1109/CVPR.2010.5540113]
Kusakunniran W, Wu Q, Zhang J and Li H D. 2012. Gait recognition under various viewing angles based on correlated motion regression. IEEE Transactions on Circuits and Systems for Video Technology, 22(6): 966-980 [DOI: 10.1109/TCSVT.2012.2186744]
Kusakunniran W, Wu Q, Zhang J, Li H D and Wang L. 2014. Recognizing gaits across views through correlated motion co-clustering. IEEE Transactions on Image Processing, 23(2): 696-709 [DOI: 10.1109/TIP.2013.2294552]
Kusakunniran W, Wu Q, Zhang J, Ma Y and Li H D. 2013. A new view-invariant feature for cross-view gait recognition. IEEE Transactions on Information Forensics and Security, 8(10): 1642-1653 [DOI: 10.1109/TIFS.2013.2252342]
Li H K, Qiu Y D, Zhao H M, Zhan J, Chen R J, Wei T J and Huang Z H. 2022. GaitSlice: a gait recognition model based on spatio-temporal slice features. Pattern Recognition, 124: #108453 [DOI: 10.1016/j.patcog.2021.108453]
Li N and Zhao X B. 2022. A strong and robust skeleton-based gait recognition method with gait periodicity priors. IEEE Transactions on Multimedia [DOI: 10.1109/TMM.2022.3154609]
Li N, Zhao X B and Ma C. 2020a. JointsGait: a model-based gait recognition method based on gait graph convolutional networks and joints relationship pyramid mapping [EB/OL]. [2022-04-15]. https://arxiv.org/pdf/2005.08625.pdf
Li S Q, Liu W and Ma H D. 2019. Attentive spatial–temporal summary networks for feature learning in irregular gait recognition. IEEE Transactions on Multimedia, 21(9): 2361-2375 [DOI: 10.1109/TMM.2019.2900134]
Li X, Makihara Y, Xu C, Yagi Y and Ren M W. 2020b. Gait recognition via semi-supervised disentangled representation learning to identity and covariate features//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 13306-13316 [DOI: 10.1109/CVPR42600.2020.01332]
Li X, Makihara Y, Xu C, Yagi Y, Yu S Q and Ren M W. 2020c. End-to-end model-based gait recognition//Proceedings of the 15th Asian Conference on Computer Vision. Kyoto, Japan: Springer: 3-20 [DOI: 10.1007/978-3-030-69535-4_1]
Liao R J, Yu S Q, An W Z and Huang Y Z. 2020. A model-based gait recognition method with body pose and human prior knowledge. Pattern Recognition, 98: #107069 [DOI: 10.1016/j.patcog.2019.107069]
Lin B B, Zhang S L and Bao F. 2020. Gait recognition with multiple-temporal-scale 3D convolutional neural network//Proceedings of the 28th ACM International Conference on Multimedia. Seattle, USA: Association for Computing Machinery: 3054-3062 [DOI: 10.1145/3394171.3413861]
Lin B B, Zhang S L and Yu X. 2021. Gait recognition via effective global-local feature representation and local temporal aggregation//Proceedings of 2021 IEEE/CVF International Conference on Computer Vision. Montreal, Canada: IEEE: 14628-14636 [DOI: 10.1109/ICCV48922.2021.01438]
Liu X K, You Z Y, He Y X, Bi S and Wang J. 2022. Symmetry-driven hyper feature GCN for skeleton-based gait recognition. Pattern Recognition, 125: #108520 [DOI: 10.1016/j.patcog.2022.108520]
Lu J W, Wang G and Moulin P. 2014. Human identity and gender recognition from gait sequences with arbitrary walking directions. IEEE Transactions on Information Forensics and Security, 9(1): 51-61 [DOI: 10.1109/TIFS.2013.2291969]
Luo J, Tang J, Tjahjadi T and Xiao X M. 2016. Robust arbitrary view gait recognition based on parametric 3D human body reconstruction and virtual posture synthesis. Pattern Recognition, 60: 361-377 [DOI: 10.1016/j.patcog.2016.05.030]
Makihara Y, Mannami H and Yagi Y. 2010. Gait analysis of gender and age using a large-scale multi-view gait database//Proceedings of the 10th Asian Conference on Computer Vision. Queenstown, New Zealand: Springer: 440-451 [DOI: 10.1007/978-3-642-19309-5_34]
Makihara Y, Sagawa R, Mukaigawa Y, Echigo T and Yagi Y. 2006. Gait recognition using a view transformation model in the frequency domain//Proceedings of the 9th European Conference on Computer Vision. Graz, Austria: Springer: 151-163 [DOI: 10.1007/11744078_12]
Martín-Félez R and Xiang T. 2012. Gait recognition by ranking//Proceedings of the 12th European Conference on Computer Vision. Florence, Italy: Springer: 328-341 [DOI: 10.1007/978-3-642-33718-5_24]
Muramatsu D, Makihara Y and Yagi Y. 2016. View transformation model incorporating quality measures for cross-view gait recognition. IEEE Transactions on Cybernetics, 46(7): 1602-1615 [DOI: 10.1109/TCYB.2015.2452577]
Muramatsu D, Shiraishi A, Makihara Y, Uddin Z and Yagi Y. 2015. Gait-based person recognition using arbitrary view transformation model. IEEE Transactions on Image Processing, 24(1): 140-154 [DOI: 10.1109/TIP.2014.2371335]
Nambiar A, Bernardino A and Nascimento J C. 2020. Gait-based person re-identification: a survey. ACM Computing Surveys, 52(2): #33 [DOI: 10.1145/3243043]
Niyogi S A and Adelson E H. 1994. Analyzing and recognizing walking figures in XYT//Proceedings of 1994 IEEE Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 469-474 [DOI: 10.1109/CVPR.1994.323868]
Qin H, Chen Z X, Guo Q Q, Wu Q M J and Lu M X. 2022. RPNet: gait recognition with relationships between each body-parts. IEEE Transactions on Circuits and Systems for Video Technology, 32(5): 2990-3000 [DOI: 10.1109/TCSVT.2021.3095290]
Rogez G, Rihan J, Guerrero J J and Orrite C. 2014. Monocular 3-D gait tracking in surveillance scenes. IEEE Transactions on Cybernetics, 44(6): 894-909 [DOI: 10.1109/TCYB.2013.2275731]
Santos C F G D, De Souza Oliveira D, Passos L A, Pires R G, Santos D F S, Valem L P, Moreira T P, Santana M C S, Roder M, Papa J P and Colombo D. 2023. Gait recognition based on deep learning: a survey. ACM Computing Surveys, 55(2): #34 [DOI: 10.1145/3490235]
Sarkar S, Phillips P J, Liu Z, Vega I R, Grother P and Bowyer K W. 2005. The humanID gait challenge problem: data sets, performance, and analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(2): 162-177 [DOI: 10.1109/TPAMI.2005.39]
Sepas-Moghaddam A and Etemad A. 2021. View-invariant gait recognition with attentive recurrent learning of partial representations. IEEE Transactions on Biometrics, Behavior, and Identity Science, 3(1): 124-137 [DOI: 10.1109/TBIOM.2020.3031470]
Sepas-Moghaddam A and Etemad A. 2023. Deep gait recognition: a survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1): 264-284 [DOI: 10.1109/TPAMI.2022.3151865]
Shakhnarovich G, Lee L and Darrell T. 2001. Integrated face and gait recognition from multiple views//Proceedings of 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Kauai, USA: IEEE: 439-446 [DOI: 10.1109/CVPR.2001.990508]
Song C F, Huang Y Z, Huang Y, Jia N and Wang L. 2019. GaitNet: an end-to-end network for gait based human identification. Pattern Recognition, 96: #106988 [DOI: 10.1016/j.patcog.2019.106988]
Song C F, Huang Y Z, Wang W N and Wang L. 2022. CASIA-E: a large comprehensive dataset for gait recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(3): 2801-2815 [DOI: 10.1109/TPAMI.2022.3183288]
Takemura N, Makihara Y, Muramatsu D, Echigo T and Yagi Y. 2018. Multi-view large population gait dataset and its performance evaluation for cross-view gait recognition. IPSJ Transactions on Computer Vision and Applications, 10(1): #4 [DOI: 10.1186/s41074-018-0039-6]
Takemura N, Makihara Y, Muramatsu D, Echigo T and Yagi Y. 2019. On input/output architectures for convolutional neural network-based cross-view gait recognition. IEEE Transactions on Circuits and Systems for Video Technology, 29(9): 2708-2719 [DOI: 10.1109/TCSVT.2017.2760835]
Tang J, Luo J, Tjahjadi T and Guo F. 2017. Robust arbitrary-view gait recognition based on 3D partial similarity matching. IEEE Transactions on Image Processing, 26(1): 7-22 [DOI: 10.1109/TIP.2016.2612823]
Tsuji A, Makihara Y and Yagi Y. 2010. Silhouette transformation based on walking speed for gait identification//Proceedings of 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Francisco, USA: IEEE: 717-722 [DOI: 10.1109/CVPR.2010.5540144]
Wan C S, Wang L and Phoha V V. 2019. A survey on gait recognition. ACM Computing Surveys, 51(5): #89 [DOI: 10.1145/3230633]
Wang K, Lei Y M and Zhang J P. 2020. Two-stream gait network for cross-view gait recognition. Pattern Recognition and Artificial Intelligence, 33(5): 383-392
汪堃, 雷一鸣, 张军平. 2020. 基于双流步态网络的跨视角步态识别. 模式识别与人工智能, 33(5): 383-392 [DOI: 10.16451/j.cnki.issn1003-6059.202005001]
Wang K J, Ding X N, Xing X L and Liu M C. 2019. A survey of multi-view gait recognition. Acta Automatica Sinica, 45(5): 841-852
王科俊, 丁欣楠, 邢向磊, 刘美辰. 2019. 多视角步态识别综述. 自动化学报, 45(5): 841-852 [DOI: 10.16383/j.aas.2018.c170559]
Wang L, Tan T N, Ning H Z and Hu W M. 2003. Silhouette analysis-based gait recognition for human identification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(12): 1505-1518 [DOI: 10.1109/TPAMI.2003.1251144]
Wu H Q, Tian J, Fu Y J, Li B and Li X. 2021. Condition-aware comparison scheme for gait recognition. IEEE Transactions on Image Processing, 30: 2734-2744 [DOI: 10.1109/TIP.2020.3039888]
Wu Z F, Huang Y Z and Wang L. 2015. Learning representative deep features for image set analysis. IEEE Transactions on Multimedia, 17(11): 1960-1968 [DOI: 10.1109/TMM.2015.2477681]
Wu Z F, Huang Y Z, Wang L, Wang X G and Tan T N. 2017. A comprehensive study on cross-view gait based human identification with deep CNNs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(2): 209-226 [DOI: 10.1109/TPAMI.2016.2545669]
Xing X L, Wang K J, Yan T and Lyu Z W. 2016. Complete canonical correlation analysis with application to multi-view gait recognition. Pattern Recognition, 50: 107-117 [DOI: 10.1016/j.patcog.2015.08.011]
Xu C, Makihara Y, Li X, Yagi Y and Lu J F. 2021. Cross-view gait recognition using pairwise spatial transformer networks. IEEE Transactions on Circuits and Systems for Video Technology, 31(1): 260-274 [DOI: 10.1109/TCSVT.2020.2975671]
Xu K, Jiang X H and Sun T F. 2022. Gait recognition based on local graphical skeleton descriptor with pairwise similarity network. IEEE Transactions on Multimedia, 24: 3265-3275 [DOI: 10.1109/TMM.2021.3095809]
Xu S, Zheng F, Tang J and Bao W X. 2022. Dual branch feature fusion network based gait recognition algorithm. Journal of Image and Graphics, 27(7): 2263-2273
徐硕, 郑锋, 唐俊, 鲍文霞. 2022. 双分支特征融合网络的步态识别算法. 中国图象图形学报, 27(7): 2263-2273 [DOI: 10.11834/jig.200730]
Yu S Q, Liao R J, An W Z, Chen H F, García E B, Huang Y Z and Poh N. 2019. GaitGANv2: invariant gait feature extraction using generative adversarial networks. Pattern Recognition, 87: 179-189 [DOI: 10.1016/j.patcog.2018.10.019]
Yu S Q, Tan D L and Tan T N. 2006. A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition//Proceedings of the 18th International Conference on Pattern Recognition. Hong Kong, China: IEEE: 441-444 [DOI: 10.1109/ICPR.2006.67]
Zhai X L, Ben X Y, Liu C and Xie T X. 2022. Decomposing identity and view for cross-view gait recognition//Proceedings of 2022 IEEE International Conference on Multimedia and Expo. Taipei, China: IEEE: 1-6 [DOI: 10.1109/ICME52920.2022.9859981]
Zhang H Y and Bao W J. 2022. The cross-view gait recognition analysis based on generative adversarial networks derived of self-attention mechanism. Journal of Image and Graphics, 27(4): 1097-1109
张红颖, 包雯静. 2022. 融合自注意力机制的生成对抗网络跨视角步态识别. 中国图象图形学报, 27(4): 1097-1109 [DOI: 10.11834/jig.200482]
Zhang S X, Wang Y H and Li A N. 2021. Cross-view gait recognition with deep universal linear embeddings//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 9091-9100 [DOI: 10.1109/CVPR46437.2021.00898]
Zhang Y Q, Huang Y Z, Wang L and Yu S Q. 2019a. A comprehensive study on gait biometrics using a joint CNN-based method. Pattern Recognition, 93: 228-236 [DOI: 10.1016/j.patcog.2019.04.023]
Zhang Y Q, Huang Y Z, Yu S Q and Wang L. 2020. Cross-view gait recognition by discriminative feature learning. IEEE Transactions on Image Processing, 29: 1001-1015 [DOI: 10.1109/TIP.2019.2926208]
Zhang Z Y, Tran L, Liu F and Liu X M. 2022. On learning disentangled representations for gait recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(1): 345-360 [DOI: 10.1109/TPAMI.2020.2998790]
Zhang Z Y, Tran L, Yin X, Atoum Y, Liu X M, Wan J and Wang N X. 2019b. Gait recognition via disentangled representation learning//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 4705-4714 [DOI: 10.1109/CVPR.2019.00484]
Zheng J K, Liu X C, Liu W, He L X, Yan C G and Mei T. 2022. Gait recognition in the wild with dense 3D representations and a benchmark//Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans, USA: IEEE: 20228-20237