Review of adversarial examples for object detection

Yuan Long1, Li Xiumei1, Pan Zhenxiong1, Sun Junmei1, Xiao Lei2 (1. School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China; 2. Engineering Research Center for Software Testing and Evaluation of Fujian Province, Xiamen 361024, China)

Abstract
Object detection is an important technology widely applied in safety-critical scenarios such as industrial control and aerospace. With the application of deep learning to object detection, detection accuracy has improved considerably, but the inherent vulnerability of deep learning poses new challenges to the reliability and security of deep learning based object detection. This paper analyzes and summarizes research on adversarial example generation and defense for object detection, aiming to provide insights for enhancing the robustness of object detection models and for designing better defense strategies. First, it introduces the concept of adversarial examples, the causes of their existence, and the evaluation metrics and datasets commonly used for adversarial example generation in object detection. Then, according to the scope of the generated perturbation, attacks are divided into global perturbation attacks and local perturbation attacks; on this basis, adversarial example generation methods for object detection are analyzed and summarized with respect to the type of detector attacked, the design of the loss function, and other aspects. The performance of several typical adversarial attack methods for object detection is compared experimentally, along with their cross-model transfer attack capability. In addition, the adversarial defense strategies currently common in object detection are analyzed and categorized. Finally, the challenges facing adversarial example generation and defense in object detection are summarized, and future research directions are discussed.
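To make the notion of an imperceptibly perturbed input concrete, the following is a minimal sketch of the fast gradient sign method (FGSM), a classic one-step attack from image classification, where adversarial examples were first discovered. It illustrates the general principle only and is not one of the detection-specific methods surveyed here.

```python
import torch

def fgsm_perturb(model, loss_fn, x, y, epsilon=8 / 255):
    """One-step FGSM: add a small, L-infinity-bounded perturbation
    in the direction that increases the model's loss, producing an
    adversarial example that looks unchanged to a human observer."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)  # loss on the clean input
    loss.backward()                  # gradient of the loss w.r.t. the input
    # Step by epsilon along the sign of the input gradient, then
    # clamp back to the valid pixel range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```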
Object detection underpins applications such as semantic segmentation and face recognition, and it is widely deployed in security-critical scenarios, including autonomous driving, industrial control, and aerospace. Traditional object detection relies on hand-crafted feature extraction and machine learning classifiers, which is costly and yields limited accuracy. Deep learning based object detection has gradually replaced traditional techniques owing to its high detection efficiency and accuracy. However, convolutional neural networks (CNNs) have been shown to be easily fooled by imperceptible perturbations; images carrying such perturbations are called adversarial examples. Adversarial examples were first discovered in image classification and gradually spread to other fields. Clarifying the vulnerabilities that adversarial attacks expose in deep object detection systems is of great significance for improving, in a holistic way, the robustness and security of deep learning based object detection models. By analyzing and summarizing recent adversarial attack and defense methods for object detection, this review aims to help enhance the robustness of object detection models and to inform better defense strategies.

First, this review discusses the development of object detection, and then introduces the origin, evolution, causes, and related terminology of adversarial examples, together with the evaluation metrics and datasets commonly used for adversarial example generation in object detection. Then, 15 adversarial example generation algorithms for object detection are classified by perturbation scope into global perturbation attacks and local perturbation attacks. Global perturbation attacks are further divided by the type of detector attacked: attacks on two-stage networks, attacks on one-stage networks, and attacks on both kinds of networks. These attack methods are additionally categorized from the following perspectives: 1) black-box versus white-box attacks, depending on whether the attacker knows the model's internal structure and parameters; 2) targeted versus non-targeted attacks, according to the identification results of the generated adversarial examples; 3) L0, L2, and L∞ attacks, according to the perturbation norm used by the attack algorithm (a brief sketch of these norms follows the abstract); and 4) single loss function attacks versus combined loss function attacks, according to the design of the attack's loss function. Summarizing and analyzing these methods along six aspects, including detector type and loss function design, reveals the following patterns in current adversarial example generation for object detection. 1) Diversity of attack forms: various adversarial loss functions, such as background loss and context loss, are combined in the design of attack methods; moreover, patch-based attacks appear in both the global and the local perturbation settings.
2) Diversity of attack targets: as object detection technology develops, detector types become more diverse, which makes adversarial example generation against detectors more varied, including attacks on one-stage detectors, on two-stage detectors, and on anchor-free detectors; adversarial attacks against new object detection techniques are likely to keep emerging. 3) Most existing attack methods are white-box attacks against specific detectors, and few are black-box attacks. The reason is that, compared with image classification models, object detection models are more complex and take longer to train, so attacking them requires more model information to generate reliable adversarial examples; designing more effective black-box attacks is therefore a promising research direction.

In addition, four classical methods are selected, namely dense adversary generation (DAG), robust adversarial perturbation (RAP), unified and efficient adversary (UEA), and targeted adversarial objectness gradient attacks (TOG), and compared through experiments. The commonly used defense strategies are then introduced from the perspectives of input preprocessing and model robustness improvement, and these methods are summarized; owing to the particularities of object detection, existing defense methods remain few and their effectiveness is limited. Furthermore, the transferability of the four attacks is compared on the you only look once (YOLO)-Darknet and single shot multibox detector (SSD300) models, and the experimental results show that UEA has the best transferability among these methods.

Finally, the challenges in generating and defending against adversarial examples for object detection are summarized from three perspectives. 1) Enhancing the transferability of adversarial examples for object detection: transferability is one of the most important properties of adversarial examples, especially in object detection, and improving it would allow adversarial examples to attack most object detection systems. 2) Advancing adversarial defense for object detection: effective defenses against current adversarial attacks are still lacking, and defense research should be developed further to enhance the robustness of object detection. 3) Reducing perturbation size and increasing generation speed: future work should pursue adversarial examples for object detection with shorter generation time and smaller perturbations.
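To make the L0/L2/L∞ taxonomy used above concrete, here is a small NumPy sketch that measures a perturbation under each of the three norms and projects it onto an L∞ ball, the constraint most global perturbation attacks impose; the function names are illustrative, not taken from any surveyed method.

```python
import numpy as np

def perturbation_norms(delta):
    """Measure a perturbation under the three norms used to
    categorize attacks: L0 counts changed pixels, L2 measures the
    total Euclidean energy, and L-infinity the largest single change."""
    flat = delta.ravel()
    return {
        "L0": int(np.count_nonzero(flat)),
        "L2": float(np.linalg.norm(flat)),
        "Linf": float(np.max(np.abs(flat))),
    }

def project_linf(delta, epsilon):
    """Project a perturbation onto the L-infinity ball of radius
    epsilon, keeping every pixel change within [-epsilon, epsilon]."""
    return np.clip(delta, -epsilon, epsilon)
```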
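The cross-model transferability experiment summarized above can be outlined as follows; `evaluate_map` stands in for a standard mAP evaluation routine and the detector registry is a placeholder, so this is a sketch of the protocol rather than the paper's actual code.

```python
def transferability_report(clean_set, adv_set, detectors, evaluate_map):
    """Measure cross-model transferability: adversarial examples are
    generated once against a source detector, then the mAP drop they
    cause is measured on each target detector (e.g., "YOLO-Darknet",
    "SSD300"). A larger drop on unseen targets means better
    transferability. `evaluate_map(model, dataset)` is assumed to
    return the model's mean average precision on the dataset."""
    report = {}
    for name, model in detectors.items():
        map_clean = evaluate_map(model, clean_set)
        map_adv = evaluate_map(model, adv_set)
        report[name] = {"mAP_clean": map_clean,
                        "mAP_adv": map_adv,
                        "mAP_drop": map_clean - map_adv}
    return report
```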
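As one hedged illustration of the input preprocessing defenses mentioned above, the sketch below re-encodes an image with lossy JPEG compression before it reaches the detector, a generic technique for weakening small high-frequency perturbations; it is not a specific method from the survey, and the quality setting is an assumption.

```python
import io
from PIL import Image

def jpeg_preprocess(image, quality=75):
    """Input preprocessing defense: re-encode the image as JPEG
    before detection. Lossy compression tends to destroy small,
    high-frequency adversarial perturbations while preserving the
    objects to be detected. Expects a PIL image."""
    buffer = io.BytesIO()
    image.convert("RGB").save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer)
```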
Keywords
