Review of physical adversarial attacks against visual deep learning models
Pages: 1-39 (2024)
Published Online: 23 December 2024
DOI: 10.11834/jig.240442
Peng Zhenbang, Zhang Yu, Dang Yi, et al. Review of physical adversarial attacks against visual deep learning models[J]. Journal of Image and Graphics.
Computer vision based on deep learning models has made substantial progress over more than a decade of research, and many mature deep learning models are now widely used across critical computer-vision applications because their accuracy and speed surpass those of traditional models. However, researchers have found that adding carefully designed, subtle perturbations to original image samples can significantly disrupt the decisions of deep learning models. Such carefully crafted adversarial attacks have raised concerns about the robustness and trustworthiness of deep learning models. Notably, some researchers have used common real-world objects or natural phenomena as carriers to design physical adversarial attacks that can be carried out in practical application scenarios. These highly practical attacks can deceive human observers while significantly interfering with deep learning models, and therefore pose a more realistic threat. To fully understand the challenges that physical adversarial attacks pose to real-world applications of deep learning-based computer vision, this paper reviews the physical adversarial attack methods proposed in 114 collected papers, organized along the general pipeline for designing such attacks. Specifically, we first summarize existing work according to how physical adversarial attacks are modeled. We then give an overview of optimization constraints and enhancement methods for physical adversarial attacks, and summarize the implementation and evaluation schemes used in existing work. Finally, we analyze the challenges facing current physical adversarial attacks and the most promising research directions. We hope to provide useful inspiration for the design of high-quality physical adversarial example generation methods and for research on trustworthy deep learning models. The review homepage is available at https://github.com/Arknightpzb/Survey-of-Physical-adversarial-attack.
Deep learning has revolutionized the field of computer vision over the past decade, bringing unprecedented advances in both accuracy and speed. These developments are vividly reflected in fundamental tasks such as image classification and object detection, where deep learning models have consistently outperformed traditional machine learning techniques. Their superior performance has led to widespread adoption across critical applications, including facial recognition, pedestrian detection, and remote sensing for earth observation. As a result, deep learning-based computer vision technologies are becoming indispensable to the continuing evolution of intelligent vision systems.

However, despite these remarkable achievements, the robustness and reliability of deep learning models have come under scrutiny because of their vulnerability to adversarial attacks. Researchers have discovered that introducing carefully designed perturbations, subtle modifications that may be imperceptible to the human eye, can significantly disrupt the decision-making processes of these models. These adversarial attacks are not merely theoretical constructs; they have practical implications that could undermine the trustworthiness of deep learning systems deployed in real-world scenarios.
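To make the notion of a carefully designed perturbation concrete, the following is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch. The model, input tensor, and step size eps are placeholders, and FGSM is only one representative digital attack among the many formulations covered in the surveyed work.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, eps=8 / 255):
    """One-step FGSM: nudge every pixel along the sign of the loss gradient.

    `model` is any differentiable classifier, `image` is a (1, C, H, W)
    tensor in [0, 1], and `eps` bounds the L-infinity perturbation size.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the classification loss.
    adv_image = image + eps * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()
```

A perturbation of this magnitude is usually invisible on screen, yet often flips the predicted class in the purely digital setting; as discussed next, physical attacks must additionally survive printing, viewpoint changes, and lighting variation.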
One of the most concerning developments in this area is the emergence of physical adversarial attacks. Unlike their digital counterparts, physical adversarial attacks involve perturbations that can be applied in the real world using common objects or natural phenomena encountered in daily life. For instance, a strategically placed sticker on a road sign might cause an autonomous vehicle's vision system to misinterpret the sign, with potentially dangerous consequences. These attacks are particularly worrisome because they can deceive not only deep learning models but also human observers, and thus pose a more realistic and severe threat to the integrity of computer vision systems.

In light of the growing significance of physical adversarial attacks, this paper aims to provide a comprehensive review of the state of the art in this field. By analyzing 114 selected papers, we offer a detailed summary of the methods used to design physical adversarial attacks, focusing on the general design process that researchers follow. This process can be broadly divided into three stages: the mathematical modeling of physical adversarial attacks, the design of performance optimization processes, and the development of implementation and evaluation schemes.

In the first stage, mathematical modeling, researchers define the problem and establish a framework for generating adversarial examples in the physical world. This involves understanding the underlying principles that make these attacks effective and exploring how physical characteristics, such as texture, lighting, and perspective, can be manipulated to create adversarial examples. Within this stage, we categorize existing attacks into three main types based on their application forms: 2D adversarial examples, 3D adversarial examples, and adversarial light and shadow projection. 2D adversarial examples typically alter the surface of an object, for example by applying a printed pattern or sticker, to fool a computer vision model. These attacks are often used in scenarios such as natural image recognition and facial recognition, where the goal is to create perturbations that are inconspicuous in real-world settings but highly disruptive to machine learning algorithms.
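As a sketch of this 2D form, the loop below optimizes a universal square patch that suppresses the true class of every image it is pasted onto. The fixed top-left placement, patch size, and hyperparameters are illustrative assumptions rather than a reconstruction of any specific surveyed method.

```python
import torch
import torch.nn.functional as F

def train_adversarial_patch(model, loader, patch_size=50, steps=200, lr=0.01):
    """Optimize a universal adversarial patch against a frozen classifier.

    `loader` yields (images, labels) batches of shape (B, 3, H, W) in [0, 1];
    the patch is pasted over a fixed corner of every image for brevity.
    """
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _, (images, labels) in zip(range(steps), loader):
        patched = images.clone()
        # Overwrite a fixed region of every image with the (clamped) patch.
        patched[:, :, :patch_size, :patch_size] = patch.clamp(0, 1)
        # Untargeted attack: maximize the loss on the true labels.
        loss = -F.cross_entropy(model(patched), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```

Printed at scale and attached to the target object, such a patch plays the role of the sticker attacks described above, although real deployments require the robustness measures discussed in the optimization stage below.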
3D adversarial examples take this concept further by considering the three-dimensional structure of objects. For example, modifying the shape or surface of a physical object can create adversarial examples that remain effective from multiple angles and under varying lighting conditions. Adversarial light and shadow projection represents another innovative approach, in which light sources or shadows are manipulated to create perturbations. These attacks are often harder to detect and defend against because they require no physical alteration of the object itself; instead, they exploit the way light interacts with surfaces to produce adversarial effects, and they have shown potential in both indoor and outdoor scenarios. We also introduce the applications of these attacks in five major scenarios: natural image recognition, facial image recognition, autonomous driving, pedestrian detection, and remote sensing.

In the second stage, the design of performance optimization processes, we believe that existing physical adversarial attacks mainly face two core problems: reality bias, the gap between a digitally simulated perturbation and its physically fabricated counterpart, and high-degree-of-freedom observation, the wide range of viewing angles, distances, and lighting conditions under which a physical example must remain effective. We review the solutions and key techniques that existing work has proposed for these two core problems.
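Expectation over transformation (EOT) is a common remedy for both problems in the surveyed literature: the perturbation is optimized under a distribution of random transformations that approximate fabrication and observation variability. Below is a minimal, assumed formulation using torchvision transforms; practical attacks typically add further terms such as smoothness (total variation) and non-printability losses.

```python
import torch.nn.functional as F
import torchvision.transforms as T

# Random transformations approximating physical-world variation:
# viewpoint (rotation, perspective), distance (scale), lighting (jitter).
eot_transforms = T.Compose([
    T.RandomRotation(degrees=15),
    T.RandomPerspective(distortion_scale=0.3, p=0.5),
    T.RandomAffine(degrees=0, scale=(0.8, 1.2)),
    T.ColorJitter(brightness=0.3, contrast=0.3),
])

def eot_loss(model, patched_images, labels, samples=8):
    """Average the untargeted attack loss over random transformations.

    Minimizing this expectation pushes the perturbation to stay adversarial
    across the conditions under which a camera may actually observe it.
    """
    total = 0.0
    for _ in range(samples):
        transformed = eot_transforms(patched_images)
        total = total - F.cross_entropy(model(transformed), labels)
    return total / samples
```

Dropping `eot_loss` into the patch-training loop above, in place of its single-view loss, yields an EOT-style training scheme; attack success rate under varied physical conditions is then the usual evaluation indicator.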
In the third stage, the design of implementation and evaluation schemes, we summarize the platforms and metrics used in existing work to evaluate the interference performance of physical adversarial examples.

Finally, we discuss highly promising research directions in physical adversarial attacks, particularly in the context of intelligent systems based on large models and embodied intelligence. Exploration in this area could reveal critical insights into how these sophisticated systems, which combine extensive data-processing capabilities with interactive and adaptive behaviors, can be compromised by physical adversarial attacks. There is also significant potential in studying physical adversarial attacks on hierarchical detection systems that integrate data from multiple sources and platforms; understanding the vulnerabilities of such complex, layered systems could lead to more robust and resilient designs. Lastly, advancing defense technology against physical adversarial attacks is crucial: comprehensive and effective defense mechanisms will be essential for ensuring the security and reliability of intelligent systems in real-world applications. We hope to provide meaningful insights for the design of high-quality physical adversarial example generation methods and the research of reliable deep learning models. The review homepage is available at https://github.com/Arknightpzb/Survey-of-Physical-adversarial-attack.
Keywords: physical adversarial attacks; general designing process; practicality of adversarial examples; deep learning; computer vision