Visual place recognition with fused event cameras

Liu Yichen, Yu Lei, Yu Huai, Yang Wen (Electronic Information School, Wuhan University, Wuhan 430072, China)

Abstract (Chinese)
Objective The performance of traditional visual place recognition (VPR) algorithms depends on the imaging quality of optical images, so the image degradation caused by high-speed and high dynamic range (HDR) scenes further impairs VPR performance. To address this problem, this paper proposes a VPR algorithm that fuses event cameras, exploiting their low latency and HDR characteristics to improve recognition performance in extreme scenarios such as high speed and HDR. Method The proposed method first extracts features from high-quality reference images with an image feature extraction module, then extracts multimodal fusion features from the query image and the events within its exposure interval with a multimodal feature fusion module, and finally retrieves the reference image most similar to the query image through feature matching. Result Experiments on the MVSEC (multi-vehicle stereo event camera) dataset and the RobotCar dataset show that the proposed method has clear advantages over existing VPR algorithms in high-speed and HDR scenes. In such scenes, the proposed method improves recall and precision by 5.39% and 8.55% over the best compared algorithm on the MVSEC dataset, and by 3.36% and 4.41% on the RobotCar dataset. Conclusion This paper proposes a VPR algorithm fused with event cameras, which exploits the imaging advantages of event cameras in high-speed and HDR scenes and effectively improves VPR performance in these scenarios.
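The retrieval pipeline described in the Method above can be summarized in a short sketch. This is a minimal illustration only, assuming PyTorch, simple stand-in CNN encoders, a 256-dimensional descriptor, and a 5-bin event voxel grid; the authors' actual modules and feature dimensions are not specified here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    # Stand-in for the image feature extraction module applied to reference images.
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))

    def forward(self, img):
        return F.normalize(self.net(img), dim=-1)  # unit-norm descriptor

class FusionEncoder(nn.Module):
    # Stand-in for the multimodal feature fusion module: the query image and an
    # event voxel grid (assumed 5 temporal bins) are concatenated channel-wise.
    def __init__(self, dim=256, event_bins=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + event_bins, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))

    def forward(self, img, events):
        return F.normalize(self.net(torch.cat([img, events], dim=1)), dim=-1)

def retrieve(query_feats, ref_feats):
    # Feature matching: return, for each query, the index of the most similar
    # reference descriptor (cosine similarity, since features are unit-norm).
    return (ref_feats @ query_feats.T).argmax(dim=0)

# Toy usage: 100 reference images, one query with its exposure-interval events.
refs = ImageEncoder()(torch.randn(100, 3, 128, 128))
query = FusionEncoder()(torch.randn(1, 3, 128, 128), torch.randn(1, 5, 128, 128))
print(retrieve(query, refs))  # index of the recognized reference place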
Keywords (Chinese)
Visual place recognition with fused event cameras

Liu Yichen, Yu Lei, Yu Huai, Yang Wen (Electronic Information School, Wuhan University, Wuhan 430072, China)

Abstract
Objective The performance of traditional visual place recognition (VPR) algorithms depends on the imaging quality of optical images. However, optical cameras suffer from low temporal resolution and limited dynamic range. For example, in a scene with high-speed motion, an optical camera has difficulty continuously capturing the rapid changes of the scene's position on the imaging plane, resulting in motion blur in the output image. When the scene brightness exceeds the recording range of the camera's photosensitive chip, the output image may be degraded by underexposure or overexposure. Blur, underexposure, and overexposure cause the loss of image texture and structure information, which reduces the performance of VPR algorithms. Therefore, the recognition performance of image-based VPR algorithms is poor in high-speed and high dynamic range (HDR) scenarios. The event camera is a new type of visual sensor inspired by biological vision, characterized by low latency and HDR. Using event cameras can effectively improve the recognition performance of VPR algorithms in high-speed and HDR scenes. Therefore, this paper proposes a VPR algorithm fused with event cameras, which exploits the low latency and HDR characteristics of event cameras to improve recognition performance in extreme scenarios such as high speed and HDR. Method The proposed method first fuses the query image with the events within its exposure interval to obtain the multimodal features of the query location, and then retrieves the reference image closest to these multimodal features in the reference image database. Specifically, the features of the high-quality reference images are extracted by an image feature extraction module, while the query image and its events within the exposure interval are fed to a multimodal feature fusion module, which produces the multimodal fusion features used to compare the query against the reference images. The reference image most similar to the query image is finally obtained through feature matching and retrieval, thereby completing visual place recognition. The network is trained under the supervision of a triplet loss. The triplet loss drives the network to reduce the feature distance between the query and the positive sample and to enlarge the distance between the query and the negative sample, until the negative distance exceeds the positive distance by at least a similarity margin constant. Reference images whose fields of view are similar to or different from the query image can thus be distinguished by their similarity in the feature vector space, completing the VPR task. Result Experiments are conducted on the MVSEC and RobotCar datasets. The proposed method is compared with image-based methods, event-based methods, and methods that use both image and event information. Under different exposure and high-speed conditions, the proposed method outperforms existing VPR algorithms. Specifically, on the MVSEC dataset, the proposed method reaches a maximum recall of 99.36% and a maximum precision of 96.34%, improving recall and precision by 5.39% and 8.55%, respectively, over existing VPR methods. On the RobotCar dataset, it reaches a maximum recall of 97.33% and a maximum precision of 93.30%, improving recall and precision by 3.36% and 4.41%, respectively. The experimental results show that, in high-speed and HDR scenes, the proposed method has clear advantages over existing VPR algorithms and delivers a remarkable improvement in recognition performance. Conclusion This paper proposes a VPR algorithm that fuses event cameras. It exploits the low latency and HDR characteristics of event cameras to overcome the loss of image information in high-speed and HDR scenes, and it effectively fuses information from the image and event modalities, thereby improving VPR performance in high-speed and HDR scenarios.
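The triplet supervision described above amounts to a hinge loss on the gap between the negative and positive distances. The sketch below is an assumption-laden illustration in PyTorch; the margin value (the "similarity distance constant") and the Euclidean distance metric are placeholders, not the paper's reported settings.

import torch
import torch.nn.functional as F

def triplet_loss(query, positive, negative, margin=0.5):
    # Distances from the query feature to the positive (same place)
    # and negative (different place) reference features.
    d_pos = F.pairwise_distance(query, positive)
    d_neg = F.pairwise_distance(query, negative)
    # Zero loss once d_neg exceeds d_pos by at least the margin, as described.
    return F.relu(d_pos - d_neg + margin).mean()

Equivalently, torch.nn.TripletMarginLoss implements the same objective.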
Keywords
