面向目标检测的对抗样本综述
Review of adversarial examples for object detection
2022, Vol. 27, No. 10, pp. 2873-2896
Print publication date: 2022-10-16
Accepted: 2021-08-04
DOI: 10.11834/jig.210209
袁珑, 李秀梅, 潘振雄, 孙军梅, 肖蕾. 面向目标检测的对抗样本综述[J]. 中国图象图形学报, 2022,27(10):2873-2896.
Long Yuan, Xiumei Li, Zhenxiong Pan, Junmei Sun, Lei Xiao. Review of adversarial examples for object detection[J]. Journal of Image and Graphics, 2022,27(10):2873-2896.
目标检测是一种广泛应用于工业控制和航空航天等安全攸关场景的重要技术。随着深度学习在目标检测领域的应用,检测精度得到较大提升,但深度学习固有的脆弱性使得基于深度学习的目标检测技术的可靠性和安全性面临新的挑战。本文对面向目标检测的对抗样本生成及防御的研究进行分析和总结,致力于为增强目标检测模型的鲁棒性和提出更好的防御策略提供思路。首先,介绍对抗样本的概念、产生原因以及目标检测领域对抗样本生成常用的评价指标和数据集。然后,根据对抗样本生成的扰动范围将攻击分为全局扰动攻击和局部扰动攻击,并在此分类基础上,分别从攻击的目标检测器类型、损失函数设计等方面对目标检测的对抗样本生成方法进行分析和总结,通过实验对比了几种典型目标检测对抗攻击方法的性能,同时比较了这几种方法的跨模型迁移攻击能力。此外,本文对目前目标检测领域常用的对抗防御策略进行了分析和归纳。最后,总结了目标检测领域对抗样本的生成及防御面临的挑战,并对未来发展方向做出展望。
Object detection is essential for various vision applications such as semantic segmentation and face recognition, and it has been widely employed in security-critical scenarios including autonomous driving, industrial control, and aerospace. Traditional object detection relies on hand-crafted feature extraction and machine learning classifiers, which is costly and yields limited detection accuracy. Deep learning based object detection has therefore gradually replaced traditional techniques owing to its higher detection efficiency and accuracy. However, it has been shown that convolutional neural networks (CNNs) can be easily fooled by imperceptible perturbations; images carrying such perturbations are called adversarial examples. Adversarial examples were first discovered in the field of image classification and have gradually spread to other fields.
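The canonical illustration of such perturbations is the fast gradient sign method (FGSM) of Goodfellow et al. (2015), listed in the references below. The following is a minimal PyTorch sketch under stated assumptions: `model` is a differentiable classifier, inputs lie in [0, 1], and the budget `eps` is illustrative rather than a value used in this survey.

```python
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    # One signed-gradient step that increases the classification loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()  # perturb toward higher loss
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixels in a valid range
```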
Clarifying how adversarial attacks exploit the vulnerabilities of deep object detection systems is of great significance for improving the robustness and security of deep learning based object detection models. This review aims to enhance the robustness of object detection models and to inspire better defense strategies by analyzing and summarizing recent adversarial attack and defense methods for object detection. First, we discuss the development of object detection, and then introduce the origin, development, causes, and related terminology of adversarial examples. The evaluation metrics and datasets commonly used for generating adversarial examples in object detection are also introduced.
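For reference, attacks on detectors are usually scored by the drop they cause in mean average precision (mAP), whose building block is the intersection over union (IoU) between predicted and ground-truth boxes. Below is a self-contained IoU helper as a minimal sketch; the function name and corner-format box convention are illustrative assumptions, not code from the surveyed papers.

```python
def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2); returns intersection over union.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

# A detection typically counts as correct when IoU >= 0.5:
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143, i.e., a miss
```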
Next, 15 adversarial example generation algorithms for object detection are classified, according to the range of the generated perturbation, into global perturbation attacks and local perturbation attacks. Under global perturbation attacks, a secondary classification is made by the type of attacked detector: attacks on two-stage networks, attacks on one-stage networks, and attacks on both kinds of networks.
Furthermore, these attack methods are classified and summarized from the following perspectives: 1) depending on whether the attacker knows the internal structure and parameters of the model, they are divided into white-box attacks and black-box attacks; 2) depending on the recognition results of the generated adversarial examples, they are divided into targeted attacks and non-targeted attacks; 3) depending on the perturbation norm used by the attack algorithm, they are divided into three categories: L0, L2, and L∞; 4) depending on the loss function design of the attack algorithm, they are divided into single-loss-function attacks and combined-loss-function attacks.
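To make perspectives 3) and 4) concrete, the sketch below couples an L∞-bounded iterative update in the style of projected gradient descent (Madry et al., 2018) with an assumed combined detector loss. `loss_fn` (e.g., a classification term plus a localization term evaluated on the perturbed image) is a hypothetical callable, and the procedure is a generic sketch rather than the exact algorithm of any single surveyed attack.

```python
import torch

def linf_detector_attack(loss_fn, x, steps=10, eps=8 / 255, alpha=2 / 255):
    # Iteratively ascend the combined detector loss while projecting the
    # perturbation back onto the L-infinity ball of radius eps.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad, = torch.autograd.grad(loss_fn(x_adv), x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # norm-ball projection
            x_adv = x_adv.clamp(0.0, 1.0)             # valid pixel range
    return x_adv.detach()
```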
These methods are then summarized and analyzed along six aspects, including detector type and loss function design, from which the following trends in current adversarial example generation for object detection emerge. 1) Diversity of attack forms: various adversarial loss terms, such as background loss and context loss, are combined in the design of attack methods; the diversity of attack forms is also reflected in the attack mechanisms themselves, with both global and local perturbations realized as patch attacks (a patch-application sketch follows this list). 2) Diversity of attack targets: as object detection technology develops, detector types become more diverse, which makes adversarial example generation against detectors more varied, including attacks on one-stage detectors, on two-stage detectors, and on anchor-free detectors; adversarial attacks against emerging object detection techniques are likely to follow. 3) Most existing attacks are white-box attacks on specific detectors, and black-box attacks remain few. The reason is that object detection models are more complex and take longer to train than image classifiers, so attacking them requires more model information to generate reliable adversarial examples; designing more effective black-box attacks is thus a promising research direction.
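As referenced in trend 1), a patch attack confines the perturbation to a small region instead of the whole image. A minimal sketch of the patch-application step, assuming (C, H, W) image tensors with values in [0, 1]; the loop that actually optimizes `patch` against a detector is omitted.

```python
def apply_patch(image, patch, top, left):
    # Paste the adversarial patch into a copy of the image; during patch
    # training, gradients flow back into `patch` through this assignment.
    patched = image.clone()
    _, patch_h, patch_w = patch.shape
    patched[:, top:top + patch_h, left:left + patch_w] = patch.clamp(0.0, 1.0)
    return patched
```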
In addition, we select four classical methods, namely dense adversary generation (DAG), robust adversarial perturbation (RAP), unified and efficient adversary (UEA), and targeted adversarial objectness gradient attacks (TOG), and compare them experimentally.
We then introduce the defense strategies commonly used in object detection from the perspectives of input preprocessing and improving model robustness, and summarize these methods. Owing to the particularities of object detection, current defenses against adversarial examples are still few and their effectiveness is limited.
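One representative preprocessing defense from the references is JPEG re-encoding (Dziugaite et al., 2016), which discards part of the high-frequency adversarial noise before an image reaches the detector. A minimal Pillow-based sketch; the quality setting is an illustrative assumption.

```python
import io
from PIL import Image

def jpeg_purify(pil_image, quality=75):
    # Round-trip the image through lossy JPEG compression.
    buf = io.BytesIO()
    pil_image.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```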
Furthermore, the cross-model transferability of the adversarial examples generated by the four methods is compared on the you only look once (YOLO)-Darknet and single shot multibox detector (SSD300) models, and the experimental results show that UEA has the best transferability among them.
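The transfer test itself is simple in form: craft adversarial examples against a white-box surrogate, then measure the mAP drop they cause on a different, unseen detector. A hedged sketch in which `surrogate_attack`, `target_detector`, `dataset`, and `evaluate_map` are all hypothetical helpers.

```python
def transfer_map_drop(surrogate_attack, target_detector, dataset, evaluate_map):
    # mAP of the target detector on clean images ...
    clean_map = evaluate_map(target_detector, dataset)
    # ... versus on examples crafted against a different (surrogate) model.
    adv_set = [(surrogate_attack(image), labels) for image, labels in dataset]
    adv_map = evaluate_map(target_detector, adv_set)
    return clean_map - adv_map  # larger drop = better transferability
```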
Finally, we summarize the challenges in generating and defending against adversarial examples for object detection from three perspectives. 1) Enhancing the transferability of adversarial examples: transferability is one of the most important properties for measuring adversarial examples, especially in object detection, and adversarial examples with stronger transferability could attack most object detection systems. 2) Advancing adversarial defense for object detection: current adversarial attacks still lack effective countermeasures, so further research on adversarial defense is needed to enhance the robustness of object detection. 3) Reducing perturbation size and increasing generation speed: future work may pursue adversarial examples for object detection with shorter generation time and smaller perturbations.
目标检测; 对抗样本; 深度学习; 对抗防御; 全局扰动; 局部扰动
object detection; adversarial examples; deep learning; adversarial defense; global perturbation; local perturbation
Akhtar N, Liu J and Mian A. 2018. Defense against universal adversarial perturbations//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 3389-3398 [DOI: 10.1109/CVPR.2018.00357]
Akhtar N and Mian A. 2018. Threat of adversarial attacks on deep learning in computer vision: a survey. IEEE Access, 6: 14410-14430 [DOI: 10.1109/ACCESS.2018.2807385]
Athalye A, Engstrom L, Ilyas A and Kwok K. 2018. Synthesizing robust adversarial examples//Proceedings of the 35th International Conference on Machine Learning. Stockholm, Sweden: PMLR: 284-293
Baluja S and Fischer I. 2017. Adversarial transformation networks: learning to generate adversarial examples [EB/OL]. [2021-02-24]. https://arxiv.org/pdf/1703.09387.pdf
Bertinetto L, Valmadre J, Henriques J F, Vedaldi A and Torr P H S. 2016. Fully-convolutional Siamese networks for object tracking//Proceedings of 2016 European Conference on Computer Vision. Amsterdam, the Netherlands: Springer: 850-865 [DOI: 10.1007/978-3-319-48881-3_56]
Bochkovskiy A, Wang C Y and Liao H Y M. 2020. YOLOv4: optimal speed and accuracy of object detection [EB/OL]. [2021-02-24]. https://arxiv.org/pdf/2004.10934.pdf
Bouabid S and Delaitre V. 2020. Mixup regularization for region proposal based object detectors [EB/OL]. [2021-02-24]. https://arxiv.org/pdf/2003.02065.pdf
Brown T B, Mané D, Roy A, Abadi M and Gilmer J. 2017. Adversarial patch [EB/OL]. [2021-02-24]. https://arxiv.org/pdf/1712.09665.pdf
Carlini N and Wagner D. 2017. Towards evaluating the robustness of neural networks//Proceedings of 2017 IEEE Symposium on Security and Privacy (SP). San Jose, USA: IEEE: 39-57 [DOI: 10.1109/SP.2017.49]
Cao J L, Li Y L, Sun H Q, Xie J, Huang K Q and Pang Y W. 2022. A survey on deep learning based visual object detection. Journal of Image and Graphics, 27(6): 1697-1722.
曹家乐, 李亚利, 孙汉卿, 谢今, 黄凯奇, 庞彦伟. 2022. 基于深度学习的视觉目标检测技术综述. 中国图象图形学报, 27(6): 1697-1722. [DOI: 10.11834/jig.220069]
Chen S T, Cornelius C, Martin J and Chau D H. 2019. ShapeShifter: robust physical adversarial attack on faster R-CNN object detector//Proceedings of 2019 European Conference on Machine Learning and Knowledge Discovery in Databases. Dublin, Ireland: Springer: 52-68 [DOI: 10.1007/978-3-030-10925-7_4]
Chiang P Y, Curry M J, Abdelkader A, Kumar A, Dickerson J and Goldstein T. 2020. Detection as regression: certified object detection by median smoothing//Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS 2020). Vancouver, Canada: [s. n.]
Chow K H, Liu L, Gursoy M E, Truex S, Wei W Q and Wu Y Z. 2020a. TOG: targeted adversarial objectness gradient attacks on real-time object detection system [EB/OL]. [2021-02-24]. https://arxiv.org/pdf/2004.04320.pdf
Chow K H, Liu L, Gursoy M E, Truex S, Wei W Q and Wu Y Z. 2020b. Understanding object detection through an adversarial lens//Proceedings of the 25th European Symposium on Research in Computer Security. Guildford, UK: Springer: 460-481 [DOI: 10.1007/978-3-030-59013-0_23]
Deng J, Dong W, Socher R, Li L J, Li K and Li F F. 2009. ImageNet: a large-scale hierarchical image database//Proceedings of 2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami, USA: IEEE: 248-255 [DOI: 10.1109/cvpr.2009.5206848]
Ding S and Zhao K. 2018. Research on daily objects detection based on deep neural network. IOP Conference Series: Materials Science and Engineering, 322(6): #062024 [DOI: 10.1088/1757-899x/322/6/062024]
Divvala S K, Efros A A and Hebert M. 2012. How important are "deformable parts" in the deformable parts model?//Proceedings of 2012 European Conference on Computer Vision. Florence, Italy: Springer: 31-40 [DOI: 10.1007/978-3-642-33885-4_4]
Dong Y P, Liao F Z, Pang T Y, Su H, Zhu J, Hu X L and Li J G. 2018. Boosting adversarial attacks with momentum//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 9185-9193 [DOI: 10.1109/CVPR.2018.00957]
Du T Y, Ji S L, Li J F, Gu Q C, Wang T and Beyah R. 2020. SirenAttack: generating adversarial audio for end-to-end acoustic systems//Proceedings of the 15th ACM Asia Conference on Computer and Communications Security. Taipei, China: ACM: 357-369 [DOI: 10.1145/3320269.3384733]
Duan K W, Bai S, Xie L X, Qi H G, Huang Q M and Tian Q. 2019. CenterNet: keypoint triplets for object detection//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea (South): IEEE: 6568-6577 [DOI: 10.1109/ICCV.2019.00667]
Dziugaite G K, Ghahramani Z and Roy D M. 2016. A study of the effect of JPG compression on adversarial images [EB/OL]. [2021-02-24]. https://arxiv.org/pdf/1608.00853.pdf
Everingham M, Van Gool L, Williams C K I, Winn J and Zisserman A. 2010. The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2): 303-338 [DOI: 10.1007/s11263-009-0275-4]
Evtimov I, Eykholt K, Fernandes E, Kohno T, Li B, Prakash A, Rahmati A and Song D. 2017. Robust physical-world attacks on machine learning models [EB/OL]. [2021-02-24]. https://arxiv.org/pdf/1707.08945v2.pdf
Girshick R. 2015. Fast R-CNN//Proceedings of 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE: 1440-1448 [DOI: 10.1109/ICCV.2015.169]
Girshick R, Donahue J, Darrell T and Malik J. 2014. Rich feature hierarchies for accurate object detection and semantic segmentation//Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus, USA: IEEE: 580-587 [DOI: 10.1109/CVPR.2014.81]
Goodfellow I J, Shlens J and Szegedy C. 2015. Explaining and harnessing adversarial examples//Proceedings of the 3rd International Conference on Learning Representations. San Diego, USA: [s. n.]
He K M, Zhang X Y, Ren S Q and Sun J. 2015. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9): 1904-1916 [DOI: 10.1109/TPAMI.2015.2389824]
Huang L C, Yang Y, Deng Y F and Yu Y N. 2015. DenseBox: unifying landmark localization with end to end object detection [EB/OL]. [2021-02-24]. https://arxiv.org/pdf/1509.04874.pdf
Ilyas A, Santurkar S, Tsipras D, Engstrom L, Tran B and Mądry A. 2019. Adversarial examples are not bugs, they are features//Proceedings of the 33rd International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc.: #12
Isola P, Zhu J Y, Zhou T H and Efros A A. 2017. Image-to-image translation with conditional adversarial networks//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 5967-5976 [DOI: 10.1109/cvpr.2017.632]
Kennedy J and Eberhart R. 1995. Particle swarm optimization//Proceedings of the ICNN'95-International Conference on Neural Networks. Perth, Australia: IEEE: 1942-1948 [DOI: 10.1109/ICNN.1995.488968]
Kurakin A, Goodfellow I J and Bengio S. 2017. Adversarial machine learning at scale//Proceedings of the 5th International Conference on Learning Representations. Toulon, France: OpenReview.net
Kuznetsova A, Rom H, Alldrin N, Uijlings J, Krasin I, Pont-Tuset J, Kamali S, Popov S, Malloci M, Kolesnikov A, Duerig T and Ferrari V. 2020. The open images dataset V4: unified image classification, object detection, and visual relationship detection at scale. International Journal of Computer Vision, 128(7): 1956-1981 [DOI: 10.1007/s11263-020-01316-z]
Law H and Deng J. 2018. CornerNet: detecting objects as paired keypoints//Proceedings of the 15th European Conference on Computer Vision (ECCV). Munich, Germany: Springer: 765-781 [DOI: 10.1007/978-3-030-01264-9_45]
LeCun Y, Bengio Y and Hinton G. 2015. Deep learning. Nature, 521(7553): 436-444 [DOI: 10.1038/nature14539]
Li H F, Li G B and Yu Y Z. 2020. ROSA: robust salient object detection against adversarial attacks. IEEE Transactions on Cybernetics, 50(11): 4835-4847 [DOI: 10.1109/tcyb.2019.2914099]
Li P C, Yi J F, Zhou B W and Zhang L J. 2019. Improving the robustness of deep neural networks via adversarial training with triplet loss//Proceedings of the 28th International Joint Conference on Artificial Intelligence. Macao, China: IJCAI: 2909-2915 [DOI: 10.24963/ijcai.2019/403]
Li Y Z, Bian X and Lyu S W. 2018a. Attacking object detectors via imperceptible patches on background [EB/OL]. [2021-02-24]. https://arxiv.org/pdf/1809.05966v1.pdf
Li Y Z, Tian D, Chang M C, Bian X and Lyu S W. 2018b. Robust adversarial perturbation on deep proposal-based models//Proceedings of the 2018 British Machine Vision Conference. Newcastle, UK: BMVA Press
Liao Q Y, Wang X, Kong B, Lyu S W, Yin Y B, Song Q and Wu X. 2020. Category-wise attack: transferable adversarial examples for anchor free object detection [EB/OL]. [2021-02-24]. https://arxiv.org/pdf/2003.04367.pdf
Lin T Y, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Dollár P and Zitnick C L. 2014. Microsoft COCO: common objects in context//Proceedings of the 13th European Conference on Computer Vision. Zurich, Switzerland: Springer: 740-755 [DOI: 10.1007/978-3-319-10602-1_48]
Liu L, Ouyang W L, Wang X G, Fieguth P, Chen J, Liu X W and Pietikäinen M. 2020. Deep learning for generic object detection: a survey. International Journal of Computer Vision, 128(2): 261-318 [DOI: 10.1007/s11263-019-01247-4]
Liu S, Qi L, Qin H F, Shi J P and Jia J Y. 2018. Path aggregation network for instance segmentation//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE Computer Society: 8759-8768 [DOI: 10.1109/cvpr.2018.00913]
Liu T, Zhao Y, Wei Y C, Zhao Y F and Wei S K. 2019a. Concealed object detection for activate millimeter wave image. IEEE Transactions on Industrial Electronics, 66(12): 9909-9917 [DOI: 10.1109/tie.2019.2893843]
Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu C Y and Berg A C. 2016. SSD: single shot MultiBox detector//Proceedings of the 14th European Conference on Computer Vision. Amsterdam, the Netherlands: Springer: 21-37 [DOI: 10.1007/978-3-319-46448-0_2]
Liu W Y, Wen Y D, Yu Z D, Li M, Raj B and Song L. 2017. SphereFace: deep hypersphere embedding for face recognition//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 6738-6746 [DOI: 10.1109/cvpr.2017.713]
Liu X, Yang H R, Liu Z W, Song L H, Li H and Chen Y R. 2019b. DPATCH: an adversarial patch attack on object detectors//Proceedings of Workshop on Artificial Intelligence Safety 2019 Co-located with the 33rd AAAI Conference on Artificial Intelligence 2019. Honolulu, USA: CEUR-WS.org
Lu J J, Sibai H and Fabry E. 2017. Adversarial examples that fool detectors [EB/OL]. [2021-02-24]. https://arxiv.org/pdf/1712.02494.pdf
Madry A, Makelov A, Schmidt L, Tsipras D and Vladu A. 2018. Towards deep learning models resistant to adversarial attacks//Proceedings of the 6th International Conference on Learning Representations. Vancouver, Canada: OpenReview.net
Modas A, Moosavi-Dezfooli S M and Frossard P. 2019. SparseFool: a few pixels make a big difference//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 9079-9088 [DOI: 10.1109/cvpr.2019.00930]
Moosavi-Dezfooli S M, Fawzi A, Fawzi O and Frossard P. 2017. Universal adversarial perturbations//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 86-94 [DOI: 10.1109/CVPR.2017.17]
Moosavi-Dezfooli S M, Fawzi A and Frossard P. 2016. DeepFool: a simple and accurate method to fool deep neural networks//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE: 2574-2582 [DOI: 10.1109/cvpr.2016.282]
Pan W W, Wang X Y, Song M L and Chen C. 2020. Survey on generating adversarial examples. Journal of Software, 31(1): 67-81
潘文雯, 王新宇, 宋明黎, 陈纯. 2020. 对抗样本生成技术综述. 软件学报, 31(1): 67-81 [DOI: 10.13328/j.cnki.jos.005884]
Redmon J, Divvala S, Girshick R and Farhadi A. 2016. You only look once: unified, real-time object detection//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE: 779-788 [DOI: 10.1109/CVPR.2016.91]
Redmon J and Farhadi A. 2017. YOLO9000: better, faster, stronger//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 6517-6525 [DOI: 10.1109/CVPR.2017.690]
Redmon J and Farhadi A. 2018. YOLOv3: an incremental improvement [EB/OL]. [2021-04-08]. https://arxiv.org/pdf/1804.02767.pdf
Ren S H, Deng Y H, He K and Che W X. 2019. Generating natural language adversarial examples through probability weighted word saliency//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Florence, Italy: ACL: 1085-1097 [DOI: 10.18653/v1/p19-1103]
Ren S Q, He K M, Girshick R and Sun J. 2015. Faster R-CNN: towards real-time object detection with region proposal networks//Proceedings of the 28th International Conference on Neural Information Processing Systems. Montreal, Canada: MIT Press: 91-99
Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S A, Huang Z H, Karpathy A, Khosla A, Bernstein M, Berg A C and Li F F. 2015. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3): 211-252 [DOI: 10.1007/s11263-015-0816-y]
Saha A, Subramanya A, Patil K and Pirsiavash H. 2020. Role of spatial context in adversarial robustness for object detection//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Seattle, USA: IEEE: 3403-3412 [DOI: 10.1109/cvprw50498.2020.00400]
Selvaraju R R, Cogswell M, Das A, Vedantam R, Parikh D and Batra D. 2017. Grad-CAM: visual explanations from deep networks via gradient-based localization//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 618-626 [DOI: 10.1109/iccv.2017.74]
Sharif M, Bhagavatula S, Bauer L and Reiter M K. 2016. Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition//Proceedings of 2016 ACM SIGSAC Conference on Computer and Communications Security. Vienna, Austria: ACM: 1528-1540 [DOI: 10.1145/2976749.2978392]
Shetty S. 2016. Application of convolutional neural network for image classification on Pascal VOC challenge 2012 dataset [EB/OL]. [2021-02-24]. https://arxiv.org/pdf/1607.03785.pdf
Song C, Cheng H P, Yang H R, Li S C, Wu C P, Wu Q, Chen Y R and Li H. 2018a. MAT: a multi-strength adversarial training method to mitigate adversarial attacks//Proceedings of 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). Hong Kong, China: IEEE: 476-481 [DOI: 10.1109/isvlsi.2018.00092]
Song D, Eykholt K, Evtimov I, Fernandes E, Li B, Rahmati A, Tramèr F, Prakash A and Kohno T. 2018b. Physical adversarial examples for object detectors//Proceedings of the 12th USENIX Workshop on Offensive Technologies. Baltimore, USA: USENIX Association
Su J, Vargas D V and Sakurai K. 2019. One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation, 23(5): 828-841 [DOI: 10.1109/TEVC.2019.2890858]
Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I J and Fergus R. 2014. Intriguing properties of neural networks [EB/OL]. [2021-02-24]. https://arxiv.org/pdf/1312.6199v4.pdf
Thys S, Van Ranst W and Goedemé T. 2019. Fooling automated surveillance cameras: adversarial patches to attack person detection//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Long Beach, USA: IEEE: 49-55 [DOI: 10.1109/cvprw.2019.00012]
Tian Z, Shen C H, Chen H and He T. 2019. FCOS: fully convolutional one-stage object detection//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea (South): IEEE: 9626-9635 [DOI: 10.1109/iccv.2019.00972]
Tramèr F, Kurakin A, Papernot N, Goodfellow I J, Boneh D and McDaniel P D. 2018. Ensemble adversarial training: attacks and defenses//Proceedings of the 6th International Conference on Learning Representations. Vancouver, Canada: OpenReview.net: 1-20
Uijlings J R R, van de Sande K E A, Gevers T and Smeulders A W M. 2013. Selective search for object recognition. International Journal of Computer Vision, 104(2): 154-171 [DOI: 10.1007/s11263-013-0620-5]
Vincent P, Larochelle H, Bengio Y and Manzagol P A. 2008. Extracting and composing robust features with denoising autoencoders//Proceedings of the 25th International Conference on Machine Learning. Helsinki, Finland: ACM: 1096-1103 [DOI: 10.1145/1390156.1390294]
Viola P and Jones M J. 2004. Robust real-time face detection. International Journal of Computer Vision, 57(2): 137-154 [DOI: 10.1023/B:VISI.0000013087.49260.fb]
Wang D R, Li C R, Wen S, Han Q L, Nepal S, Zhang X Y and Xiang Y. 2021. Daedalus: breaking nonmaximum suppression in object detection via adversarial examples. IEEE Transactions on Cybernetics [DOI: 10.1109/tcyb.2020.3041481]
Wang Q L, Guo W B, Ororbia II A G, Xing X Y, Lin L, Giles C L, Liu X, Liu P and Xiong G. 2016a. Using non-invertible data transformations to build adversary-resistant deep neural networks [EB/OL]. [2021-02-24]. https://arxiv.org/pdf/1610.01934v4.pdf
Wang Q L, Guo W B, Zhang K X, Xing X Y, Giles C L and Liu X. 2016b. Random feature nullification for adversary resistant deep architecture [EB/OL]. [2021-02-24]. https://arxiv.org/pdf/1610.01239v3.pdf
Wang X Y, Han T X and Yan S C. 2009. An HOG-LBP human detector with partial occlusion handling//Proceedings of the 12th IEEE International Conference on Computer Vision. Kyoto, Japan: IEEE: 32-39 [DOI: 10.1109/ICCV.2009.5459207]
Wang Y J, Tan Y A, Zhang W J, Zhao Y H and Kuang X H. 2020. An adversarial attack on DNN-based black-box object detectors. Journal of Network and Computer Applications, 161: #102634 [DOI: 10.1016/j.jnca.2020.102634]
Wei X X, Liang S Y, Chen N and Cao X C. 2019. Transferable adversarial attacks for image and video object detection//Proceedings of the 28th International Joint Conference on Artificial Intelligence. Macao, China: IJCAI: 954-960 [DOI: 10.24963/ijcai.2019/134]
Wu X, Huang L F and Gao C Y. 2019. G-UAP: generic universal adversarial perturbation that fools RPN-based detectors//Proceedings of the 11th Asian Conference on Machine Learning. Nagoya, Japan: PMLR: 1204-1217
Xiang C and Mittal P. 2021. DetectorGuard: provably securing object detectors against localized patch hiding attacks//Proceedings of 2021 ACM SIGSAC Conference on Computer and Communications Security. Virtual Event: ACM: 3177-3196 [DOI: 10.1145/3460120.3484757]
Xiao C W, Li B, Zhu J Y, He W, Liu M Y and Song D. 2018. Generating adversarial examples with adversarial networks//Proceedings of the 27th International Joint Conference on Artificial Intelligence. Stockholm, Sweden: IJCAI: 3905-3911 [DOI: 10.24963/ijcai.2018/543]
Xie C H, Wang J Y, Zhang Z S, Zhou Y Y, Xie L X and Yuille A. 2017. Adversarial examples for semantic segmentation and object detection//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 1378-1387 [DOI: 10.1109/iccv.2017.153]
Xu W P, Huang H C and Pan S Y. 2021. Using feature alignment can improve clean average precision and adversarial robustness in object detection//Proceedings of 2021 IEEE International Conference on Image Processing (ICIP). Anchorage, USA: IEEE: 2184-2188 [DOI: 10.1109/ICIP42928.2021.9506689]
Yan B, Wang D, Lu H C and Yang X Y. 2020. Cooling-shrinking attack: blinding the tracker with imperceptible noises//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 987-996 [DOI: 10.1109/cvpr42600.2020.00107]
Yuan X Y, He P, Zhu Q L and Li X L. 2019. Adversarial examples: attacks and defenses for deep learning. IEEE Transactions on Neural Networks and Learning Systems, 30(9): 2805-2824 [DOI: 10.1109/tnnls.2018.2886017]
Zhang H C and Wang J Y. 2019. Towards adversarially robust object detection//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea (South): IEEE: 421-430 [DOI: 10.1109/iccv.2019.00051]
Zhang H T, Zhou W G and Li H Q. 2020. Contextual adversarial attacks for object detection//Proceedings of 2020 IEEE International Conference on Multimedia and Expo (ICME). London, UK: IEEE: 1-6 [DOI: 10.1109/icme46284.2020.9102805]
Zhang H Y, Cissé M, Dauphin Y N and Lopez-Paz D. 2018. mixup: beyond empirical risk minimization//Proceedings of the 6th International Conference on Learning Representations. Vancouver, Canada: OpenReview.net: 1-13
Zhou X Y, Wang D Q and Krähenbühl P. 2019a. Objects as points [EB/OL]. [2021-02-24]. https://arxiv.org/pdf/1904.07850.pdf
Zhou X Y, Zhuo J C and Krähenbühl P. 2019b. Bottom-up object detection by grouping extreme and center points//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 850-859 [DOI: 10.1109/cvpr.2019.00094]