Single-modality self-supervised information mining for cross-modality person re-identification
2022, Vol. 27, No. 10: 2843-2859
Print publication date: 2022-10-16
Accepted: 2022-05-30
DOI: 10.11834/jig.211050
Ancong Wu, Chengzhi Lin, Weishi Zheng. Single-modality self-supervised information mining for cross-modality person re-identification[J]. Journal of Image and Graphics, 2022, 27(10): 2843-2859.
Objective
Urban video surveillance systems have been developing rapidly. Analyzing surveillance videos is essential for security, but processing such a huge amount of data manually is labor-intensive, time-consuming and costly, so intelligent video analysis is an effective way to cope with it. To analyze pedestrian-related events, person re-identification is the basic problem of matching pedestrians across non-overlapping camera views in order to obtain the trajectories of persons in a camera network. Cross-camera scene variations, such as illumination, resolution, occlusion and background clutter, are the key challenges of person re-identification. Thanks to the development of deep learning, single-modality visible-image matching has achieved remarkable performance on benchmark datasets. However, visible-image matching is not applicable in low-light scenarios such as night-time outdoor scenes or dark indoor scenes. To cope with low-light conditions, most surveillance cameras automatically switch to acquiring near-infrared images, which are visually very different from visible images. When person re-identification has to be performed across normal-light and low-light scenes, the performance of current methods on cross-modality matching between visible images and infrared images is still unsatisfactory, so visible-infrared cross-modality person re-identification needs further study. This task involves two key challenges. First, the spectra and visual appearances of visible images and infrared images differ significantly: visible images contain three channels of red (R), green (G) and blue (B) responses, whereas infrared images contain only one channel of near-infrared responses, which leads to a large modality gap. Second, labeled data are scarce, because it is difficult for humans to identify the same pedestrian across visible and infrared images; the current multi-modality benchmark dataset contains only 500 person identities, which is insufficient for training deep models. Existing visible-infrared cross-modality person re-identification methods mainly focus on bridging the modality gap, while the problem of limited labeled data is largely ignored.
Method
To provide prior knowledge for learning the cross-modality matching model, we study self-supervised information mining on single-modality data based on auxiliary labeled visible images. First, we propose a data augmentation method called the random single-channel mask. For a three-channel visible image as input, a random mask preserves the information of only one channel, so that the extracted features become robust to spectrum changes. The random single-channel mask forces the first convolutional layer of the network to learn kernels specific to the R, G or B channel for extracting shared appearance and shape features.
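To make the augmentation concrete, the following is a minimal PyTorch-style sketch of a random single-channel mask. The masking probability, the choice to zero (rather than replicate) the suppressed channels, and the class and parameter names are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a random single-channel mask transform (assumptions noted above).
import torch


class RandomSingleChannelMask:
    """With probability p, keep exactly one of the R/G/B channels and zero the rest."""

    def __init__(self, p: float = 0.5):
        self.p = p

    def __call__(self, img: torch.Tensor) -> torch.Tensor:
        # img: float tensor of shape (3, H, W), e.g. the output of transforms.ToTensor()
        if torch.rand(1).item() > self.p:
            return img
        kept = torch.randint(0, 3, (1,)).item()   # index of the channel to preserve
        mask = torch.zeros_like(img)
        mask[kept] = 1.0
        return img * mask                          # the other two channels are suppressed


if __name__ == "__main__":
    x = torch.rand(3, 256, 128)                    # a dummy person image
    aug = RandomSingleChannelMask(p=1.0)
    y = aug(x)
    print(y.sum(dim=(1, 2)) > 0)                   # only one channel remains non-zero
```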
Furthermore, for pre-training and fine-tuning, we propose mutual learning between a single-channel model and a three-channel model. To mine and transfer cross-spectrum robust self-supervised information, the mutual learning leverages the relation between single-channel data and three-channel data: the three-channel model focuses on extracting color-sensitive features, while the single-channel model focuses on extracting color-invariant features. Transferring this complementary knowledge through mutual learning improves the matching performance of the cross-modality matching model.
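The two-branch mutual learning can be sketched in the spirit of deep mutual learning (Zhang et al., 2018): each branch is trained with an identity classification loss plus a KL term that aligns its predictions with the peer's. In this sketch the "single-channel" branch receives the masked three-channel tensor produced above; the backbone, loss weights, detaching of peer logits and the hypothetical number of identities are assumptions for illustration only.

```python
# Sketch of three-channel / single-channel mutual learning (assumptions noted above).
import torch
import torch.nn.functional as F
from torchvision.models import resnet50


def mutual_learning_step(model_rgb, model_mono, opt, x_rgb, x_mono, labels, kl_weight=1.0):
    """One optimization step of the two-branch mutual learning objective."""
    logits_rgb = model_rgb(x_rgb)          # three-channel branch
    logits_mono = model_mono(x_mono)       # branch fed with single-channel-masked images

    # Supervised identity classification loss for both branches.
    ce = F.cross_entropy(logits_rgb, labels) + F.cross_entropy(logits_mono, labels)

    # Each branch mimics the other's (detached) class posterior.
    kl_rgb = F.kl_div(F.log_softmax(logits_rgb, dim=1),
                      F.softmax(logits_mono.detach(), dim=1), reduction="batchmean")
    kl_mono = F.kl_div(F.log_softmax(logits_mono, dim=1),
                       F.softmax(logits_rgb.detach(), dim=1), reduction="batchmean")

    loss = ce + kl_weight * (kl_rgb + kl_mono)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


num_ids = 751  # hypothetical number of training identities in the auxiliary visible data
model_rgb = resnet50(num_classes=num_ids)
model_mono = resnet50(num_classes=num_ids)
opt = torch.optim.Adam(list(model_rgb.parameters()) + list(model_mono.parameters()), lr=3e-4)
```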
Result
Extensive comparative experiments were conducted on the SYSU-MM01 (Sun Yat-Sen University Multiple Modality 01), RGBNT201 (RGB, near-infrared, thermal-infrared, 201) and RegDB datasets. Our method achieves state-of-the-art performance on all three datasets; compared with the best results of the competing methods, it improves the mean average precision (mAP) on RGBNT201 by up to nearly 5%.
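For reference, mean average precision for re-identification retrieval is typically computed as sketched below: for each query, gallery items are ranked by descending similarity and average precision is taken over the positions of the true matches. This sketch assumes a plain similarity ranking and omits the camera-based filtering rules of the specific benchmarks.

```python
# Sketch of mAP computation for retrieval-style evaluation (assumptions noted above).
import numpy as np


def average_precision(ranked_matches: np.ndarray) -> float:
    """ranked_matches: 1D boolean array, True where the ranked gallery item shares the query identity."""
    hits = np.where(ranked_matches)[0]
    if hits.size == 0:
        return 0.0
    precisions = np.arange(1, hits.size + 1) / (hits + 1)   # precision at each hit rank
    return float(precisions.mean())


def mean_average_precision(sim: np.ndarray, q_ids: np.ndarray, g_ids: np.ndarray) -> float:
    """sim: (num_query, num_gallery) similarity matrix; q_ids/g_ids: identity labels."""
    aps = []
    for i in range(sim.shape[0]):
        order = np.argsort(-sim[i])                          # rank gallery by similarity
        aps.append(average_precision(g_ids[order] == q_ids[i]))
    return float(np.mean(aps))
```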
Conclusion
We propose a single-modality cross-spectrum self-supervised information mining method, which utilizes auxiliary single-modality visible images to mine self-supervised information that is robust to spectrum changes. This prior knowledge guides single-modality pre-training and multi-modality supervised fine-tuning, leading to better matching ability of the cross-modality person re-identification model.
person re-identification; cross-modality retrieval; infrared image; self-supervised learning; mutual learning