Objective With the growth of large-scale image surveillance and video data in the public-safety domain and the development of intelligent transportation, vehicle retrieval has extremely important application value. To address the low levels of automation and intelligence in existing vehicle retrieval and the difficulty of obtaining accurate results, this paper proposes a vehicle retrieval method based on multi-task segmented compact features, which effectively exploits the diversity of and correlation among basic vehicle attributes to achieve real-time retrieval. Method First, since the connections between related tasks can improve retrieval accuracy and refine image features, a multi-task deep convolutional network is constructed to learn hash codes for different vehicle attributes segment by segment, combining image semantics with image representation; a minimized image coding makes the learned attribute features more robust. Then, a feature pyramid network is used to extract instance features of vehicle images, and the extracted features are retrieved with a locality-sensitive-hashing reranking method. Finally, for the special case in which no image of the query vehicle can be obtained, a cross-modal auxiliary retrieval method is adopted. Result The proposed method outperforms current mainstream retrieval methods on three public datasets, reaching a retrieval precision of 0.966 on the CompCars dataset and 0.862 on the VehicleID dataset. Conclusion The proposed multi-task segmented compact-feature vehicle retrieval method obtains both minimized image codes and image instance features, and can also perform cross-modal retrieval when the target query image is unavailable; comparative experiments verify the effectiveness of the method.
Objective In the field of public safety, large-scale image surveillance and video data are growing continuously, and intelligent transportation keeps evolving, so vehicle retrieval has extremely important application value. Existing vehicle retrieval suffers from low levels of automation and intelligence, makes it difficult to obtain accurate results, and consumes a large amount of storage space. To solve these problems, this paper proposes a multi-task segmented compact-feature vehicle retrieval method. The method effectively uses the correlation between detection and identification tasks and fully exploits the diversity of vehicle appearance attributes to achieve real-time retrieval. Vehicle retrieval based on appearance features can compensate for the inadequacy of traditional license-plate recognition, and it has very broad application prospects in traffic-violation inspection and in tracking down suspect vehicles. Method First, this paper constructs a multi-task deep convolutional network to learn hash codes segment by segment. This learning scheme combines image semantics with image representation and uses the connections between related tasks to improve retrieval accuracy and refine image features. It adopts a minimized image coding so that the learned vehicle features are more robust.
Then, we use a feature pyramid network to extract instance features from the vehicle image; during retrieval, the extracted features are ranked with a locality-sensitive-hashing reranking method. Finally, in some searches no image of the query vehicle can be obtained, for example when a camera's night-vision footage is blurred; for such cases this paper proposes cross-modal auxiliary retrieval to meet the practical requirements of different environments. Result Two datasets are used to verify the recognition performance of the multi-task network, both containing large-scale images of diverse vehicles. The BIT-Vehicle database, a commonly used vehicle-identification database, contains 9,850 checkpoint vehicle images divided into 12 categories across two tasks, color and model. To further verify fine-grained vehicle classification and multi-task recognition accuracy, we use the CompCars dataset, which is more finely subdivided than BIT-Vehicle. CompCars contains two parts, web-collected images and checkpoint-captured images; we selected and organized the checkpoint part, a total of 30,000 frontal checkpoint images labeled with 11 body colors, 69 vehicle brands, 281 vehicle models, and 3 vehicle types. This dataset is therefore better suited to verifying the recognition performance of the multi-task convolutional neural network. In addition, to verify the general applicability of the proposed retrieval method, vehicle retrieval experiments are added on the VehicleID dataset, which contains nearly 200,000 images of about 26,000 vehicles captured by surveillance cameras in real-world scenes under different conditions, covering 250 vehicle models and 7 colors.
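The two-stage retrieval described in Method, a coarse pass over compact binary codes followed by reranking with finer instance features, can be sketched as follows. This is a simplified stand-in, not the paper's implementation: a real locality-sensitive-hashing index would use hash buckets for sublinear candidate lookup, whereas here the shortlist is formed by a linear Hamming-distance scan; all ids, codes, and feature vectors are made up.

```python
# Illustrative two-stage retrieval: coarse candidate selection by Hamming
# distance on binary codes, then reranking the shortlist by squared
# Euclidean distance on real-valued instance features.

def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return sum(x != y for x, y in zip(a, b))

def euclidean_sq(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def retrieve(query_code, query_feat, db, k=2):
    """db: list of (id, code, feature). Return top-k ids after reranking."""
    # Stage 1: shortlist candidates by Hamming distance on compact codes.
    shortlist = sorted(db, key=lambda e: hamming(query_code, e[1]))[:k * 2]
    # Stage 2: rerank the shortlist with finer real-valued features.
    reranked = sorted(shortlist, key=lambda e: euclidean_sq(query_feat, e[2]))
    return [e[0] for e in reranked[:k]]

db = [
    ("car_a", [1, 0, 1, 0], [0.9, 0.1]),
    ("car_b", [1, 0, 1, 1], [0.2, 0.8]),
    ("car_c", [0, 1, 0, 1], [0.5, 0.5]),
]
print(retrieve([1, 0, 1, 0], [0.85, 0.15], db))  # → ['car_a', 'car_c']
```

The design point is that the cheap Hamming stage prunes most of the database before any expensive real-valued comparison is made, which is what keeps retrieval over large image collections near real time.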
The proposed retrieval method outperforms current mainstream methods on all three public datasets: retrieval precision reaches 0.966 on the CompCars dataset and rises to 0.862 on the VehicleID dataset, a substantial improvement over existing methods. Conclusion Starting from real public-safety scenarios, this paper is dedicated to improving retrieval accuracy over massive video data. We design a multi-task neural-network learning method suited to both identification and retrieval, unifying multiple feature-extraction tasks in a single model trained end to end. The proposed multi-task segmented compact-feature vehicle retrieval method obtains minimized image codes and image instance features, and can also perform cross-modal retrieval when the target query image is unavailable. Comparative experiments verify the effectiveness of the method.