Liu Yuqi, Ma Bingpeng (School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, China)
Objective Features extracted by traditional person re-identification methods contain abundant clothing information. In the clothes-changing setting, clothing-related cues cannot reliably determine a person's identity, so model performance drops sharply. Although some methods extract body-shape information from contour images to enhance person features, contour images vary widely in quality and lack robustness. To address these problems, this paper proposes a sketch image-guided clothes-changing person re-identification method. Method First, we argue that sketch images provide more robust and more accurate body-shape information than contour images, so we extract body-shape information from sketch images and fuse it into the appearance features to obtain complete person features. We then propose a sketch-based clothes-irrelevant weight guidance module, which further uses the clothing-position information in sketch images to guide the extraction of appearance features, thereby reducing the clothing information in the appearance features and enhancing their discriminative power. Result The proposed method is compared with state-of-the-art methods on two widely used clothes-changing datasets, LTCC (long-term cloth changing) and PRCC (person re-identification under moderate clothing change), improving Rank-1 by 6.5% and 3.9%, respectively. The experimental results show that sketch images outperform contour images in both robustness and accuracy, capture body-shape information better, and provide more complementary shape information for appearance features. Conclusion The proposed clothes-irrelevant weight guidance module effectively reduces the clothing information contained in appearance features; the proposed sketch image-guided clothes-changing person re-identification method effectively obtains complete person features comprising clothes-irrelevant appearance features and body-shape features.
Sketch images-guided clothes-changing person re-identification
Liu Yuqi,Ma Bingpeng(School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, China)
Objective Video surveillance systems have been widely used for public security purposes such as tracking suspects and searching for missing persons. Analyzing surveillance videos manually is expensive and time-consuming. Person re-identification (ReID) aims to match the same person appearing at different times and places under non-overlapping cameras. With the development of deep learning, ReID has achieved significant performance gains on standard benchmarks. In conventional ReID, retrieval mainly depends on appearance cues such as clothing. However, if surveillance video is captured over a long time span, people may change their clothes between appearances; criminals may also change clothes deliberately to evade surveillance cameras. In such cases, existing methods are likely to fail because they extract unreliable clothes-relevant features. The clothes-changing problem is therefore inevitable in real-world ReID applications, and clothes-changing ReID has recently received much attention. In clothes-changing ReID, every person wears multiple outfits, so the key is to extract discriminative clothes-irrelevant features from images of the same person in different clothes. Body shape is one such cue that can be used to identify people. Existing methods follow two main approaches: 1) extracting clothes-irrelevant information such as key points, pose, and gait, and fusing it into person features; 2) decoupling clothes-irrelevant and clothes-relevant features with an encoder-decoder architecture. Some of these methods extract body-shape information from contour images but suffer from low image quality and poor robustness. To resolve these problems, we propose a sketch image-guided clothes-changing person re-identification method. Method First, to improve the accuracy and robustness of body-shape information, we propose to obtain shape information from sketch images rather than contour images.
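The intuition behind sketch images can be illustrated with a toy edge-extraction routine. The paper's sketches come from a dedicated sketch-generation model, not from simple filtering; the function below is only a stand-in showing how an edge map keeps body-shape contours while discarding clothing color and texture:

```python
import numpy as np

def toy_sketch(gray: np.ndarray) -> np.ndarray:
    """Toy 'sketch' via Sobel gradient magnitude (illustration only).

    gray: 2-D grayscale image. Returns a uint8 edge map in [0, 255].
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(gray.astype(float), 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]   # 3x3 neighborhood of pixel (i, j)
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    mag = np.hypot(gx, gy)                # gradient magnitude = edge strength
    if mag.max() > 0:
        mag = mag / mag.max() * 255
    return mag.astype(np.uint8)
```

Flat regions (uniform clothing color) map to zero while silhouette boundaries map to strong responses, which is the property that makes sketch-like inputs clothes-insensitive.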
Then we use an extra, independently trained network to extract the shape features of the person. Additionally, to reduce the clothing information in visual features and improve their discriminative power, we propose a clothes-irrelevant weight guidance module based on sketch images. The module uses the clothing-position information in sketch images to guide the extraction of visual features; with this guidance, the model extracts features carrying less clothing information. A two-stream network then fuses the shape feature and the clothes-irrelevant appearance feature into a complete person feature. We implement our method in Python with PyTorch and train the network on one NVIDIA 3090 GPU. Random horizontal flipping and random erasing are used for data augmentation. We use the Adam optimizer with an initial learning rate of 0.000 35, decayed every 20 epochs. Result The proposed method is evaluated on two public clothes-changing datasets: long-term cloth changing (LTCC) and person re-identification under moderate clothing change (PRCC). Our method outperforms the state-of-the-art methods on both datasets, obtaining 38.0% Rank-1 and 15.9% mean average precision (mAP) on LTCC, and 55.5% Rank-1 and 52.6% mAP on PRCC. Ablation experiments demonstrate that sketch images are superior to contour images in robustness and accuracy. Visualization results show that the proposed method effectively weakens the model's attention on clothing areas. Conclusion We propose a better way to extract body-shape information and a sketch-based guidance module that utilizes clothes-irrelevant information to suppress clothing information in visual features. Experiments show that sketch images are superior to contour images in robustness and accuracy.
Sketch images provide more complementary body-shape information for visual features than contour images do. The proposed clothes-irrelevant weight guidance module effectively reduces clothing information in visual features. Our sketch image-guided clothes-changing person re-identification method effectively extracts complete person features, which comprise clothes-irrelevant visual features and body-shape features.
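The two-stream design and the sketch-based guidance described above can be sketched as the following toy PyTorch module. The layer sizes and the plain sigmoid guidance head are hypothetical placeholders (the abstract does not specify the backbone or module internals); the block only shows the data flow: an appearance stream, a shape stream, and a sketch-derived spatial weight map that down-weights clothing regions before pooling:

```python
import torch
import torch.nn as nn

class SketchGuidedTwoStream(nn.Module):
    """Toy two-stream network with sketch-based clothes-irrelevant weighting.

    Hypothetical architecture for illustration; NOT the authors' implementation.
    """

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Appearance stream over RGB input.
        self.rgb_stream = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # Shape stream over the single-channel sketch input.
        self.sketch_stream = nn.Sequential(
            nn.Conv2d(1, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # Guidance head: a spatial weight in (0, 1) predicted from sketch features.
        self.guidance = nn.Sequential(nn.Conv2d(feat_dim, 1, 1), nn.Sigmoid())
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, rgb: torch.Tensor, sketch: torch.Tensor) -> torch.Tensor:
        app = self.rgb_stream(rgb)         # (B, C, H, W) appearance features
        shp = self.sketch_stream(sketch)   # (B, C, H, W) shape features
        w = self.guidance(shp)             # (B, 1, H, W) clothes-irrelevant weights
        app = app * w                      # suppress clothing regions spatially
        app_vec = self.pool(app).flatten(1)
        shp_vec = self.pool(shp).flatten(1)
        return torch.cat([app_vec, shp_vec], dim=1)  # complete person feature
```

Following the training configuration in the abstract, such a model could be optimized with `torch.optim.Adam(model.parameters(), lr=3.5e-4)` together with a step decay schedule such as `torch.optim.lr_scheduler.StepLR(optimizer, step_size=20)`.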
Keywords: computer vision; clothes-changing person re-identification; sketch images; appearance feature; shape feature; two-stream network