Point Cloud Replacement Adversarial Attack Based on a Saliency Map

Liu Fuchang1, Nan Bo1, Miao Yongwei1,2 (1. School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China; 2. College of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China)

Abstract
Objective Research on adversarial attacks has traditionally focused on the 2D image domain. Modifying a 3D object directly affects its 3D characteristics, and generating imperceptible perturbations is very difficult, so studies of adversarial attacks on 3D point cloud data remain scarce. Deep neural networks for point cloud tasks such as object classification and object segmentation are vulnerable to point cloud adversarial samples, which cause the networks to make wrong predictions. This paper therefore proposes a point cloud replacement adversarial attack based on a saliency map. Method Because existing point cloud classification networks typically rely on the critical points of a point cloud model, the proposed method computes each point's saliency value by moving the point toward the point cloud center, thereby constructing a point cloud saliency map; the sampled points with the highest saliency values are selected as the critical point set to ensure a larger impact on the network's classification result. The Chamfer distance is used to measure the difference between point cloud models, and the critical point set of the model in the library with the smallest Chamfer distance is selected as the replacement, minimizing the point cloud perturbation and making it difficult for the human eye to notice. Result Comparative experiments are conducted on the ModelNet40 dataset with the point cloud classification networks PointNet and PointNet++. On PointNet, compared with FGSM (fast gradient sign method), I-FGSM (iterative fast gradient sign method), and JSMA (Jacobian-based saliency map attack), the attack success rate of the proposed method is higher by 38.6%, 7.3%, and 41%, respectively; when 100 sampled points are perturbed, the method reduces the network accuracy to 6.2%. On PointNet++, compared with FGSM and JSMA, the attack success rate is higher by 58.6% and 85.3%, respectively; when 100 sampled points are perturbed, the method reduces the network accuracy to 12.8%. Conclusion The proposed point cloud adversarial attack method considers both the efficiency of the attack and the imperceptibility of the adversarial samples, and can efficiently attack mainstream point cloud deep neural networks.
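The saliency computation described in the Method paragraph above (shifting each point toward the point cloud center and observing how the classification loss changes) can be sketched roughly as follows. This is a minimal numpy illustration under stated assumptions: `loss_fn` is a hypothetical callable standing in for the victim classifier's loss, and the centroid is used as a simple stand-in for the point cloud center; the paper's actual construction estimates critical points iteratively.

```python
import numpy as np

def saliency_scores(points, loss_fn):
    """Estimate per-point saliency as the loss change when a point is
    shifted to the cloud's centroid (sketch; loss_fn is hypothetical)."""
    center = points.mean(axis=0)
    base = loss_fn(points)
    scores = np.empty(len(points))
    for i in range(len(points)):
        shifted = points.copy()
        shifted[i] = center                  # move point i to the center
        scores[i] = loss_fn(shifted) - base  # larger loss increase => more critical
    return scores

def top_k_critical(points, loss_fn, k):
    """Indices of the k points with the highest saliency scores."""
    return np.argsort(-saliency_scores(points, loss_fn))[:k]
```

In the attack, the returned indices would be the candidate points to replace with the critical points of the nearest library model.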
Keywords
Point cloud replacement adversarial attack based on saliency map

Liu Fuchang1, Nan Bo1, Miao Yongwei1,2(1.School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China;2.College of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China)

Abstract
Objective Deep learning networks are vulnerable to attacks from well-crafted adversarial samples, which cause neural networks to produce erroneous results. However, current research on adversarial attacks focuses mostly on 2D images and convolutional neural networks (CNNs), so research on 3D data such as point clouds is minimal. In recent years, deep learning has achieved great success in applications involving 3D data. Considering the many safety-critical applications of 3D object classification, such as autonomous driving, studying how point cloud adversarial samples affect current 3D deep learning networks is very important. Recently, researchers have made great progress on tasks such as object classification and instance segmentation using deep neural networks on point clouds, with PointNet and PointNet++ as the classical representatives. Robustness against attacks has been studied rigorously in 3D deep learning because security plays a vital role in deep learning systems. Many studies have shown that deep neural networks for processing 2D images are extremely weak against adversarial samples, and most defense methods have been defeated by adversarial attacks. For instance, the fast gradient sign method (FGSM) is a classical attack algorithm that successfully makes a neural network recognize a panda as a gibbon, whereas humans cannot distinguish between the two pictures before and after the attack. Subsequently, the iterative fast gradient sign method (I-FGSM) was proposed to improve FGSM, making the attack more successful and harder to defend against, and underscoring the challenge posed by adversarial attacks. An important concept is developed in PointNet.
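The FGSM baseline and its iterative variant mentioned above can be sketched in a few lines; this is a minimal numpy illustration, where `grad` and `grad_fn` are hypothetical stand-ins for the gradient of the victim network's loss with respect to the input, not part of the paper's code.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """One FGSM step: shift each coordinate by eps along the sign of the
    loss gradient (grad is assumed to come from the victim network)."""
    return x + eps * np.sign(grad)

def ifgsm_perturb(x, grad_fn, eps, steps):
    """I-FGSM: several smaller FGSM steps, re-querying the gradient each time."""
    adv = x.copy()
    for _ in range(steps):
        adv = adv + (eps / steps) * np.sign(grad_fn(adv))
    return adv
```

Both serve as the white-box baselines that the proposed replacement attack is later compared against.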
The authors of PointNet indicate that the network can classify a model correctly from only a subset of the point cloud; this subset, which dominates the classification result, is called the set of critical points. They also point out that the strong robustness of PointNet depends on the existence of these critical points. However, the theory of critical points is still inadequate: the concept is vague because it assigns no importance value to individual points or subsets. Therefore, the point cloud saliency map is introduced to solve this problem, because it can estimate the importance of every single point. After the importance of each point is computed, the k most important points can be perturbed to generate adversarial samples and attack the network. Method Following the analysis of critical points above, a point cloud saliency map is first built to enhance the effectiveness of attacks. During saliency map construction, critical points are estimated iteratively to avoid dependencies between different points. After the saliency score of each point is estimated, the proposed algorithm perturbs the k points with the highest saliency scores. Specifically, the k points with the highest saliency scores in the input point cloud are selected and exchanged with the critical points of the library model that has the smallest Chamfer distance to the input. The Chamfer distance is often used to measure the difference between two point clouds: the smaller the difference between the point clouds, the smaller the Chamfer distance, i.e., point clouds with a smaller Chamfer distance appear more similar. The proposed method not only limits the search space but also minimizes the disturbance of the point cloud, so the adversarial point cloud sample remains imperceptible to human eyes.
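The Chamfer-distance-based selection of the replacement model described above can be sketched as follows. This is a minimal numpy illustration; `nearest_model` and the `library` list are hypothetical names, and a real implementation would then swap in the critical point set of the selected model.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N,3) and b (M,3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def nearest_model(query, library):
    """Index of the library model with the smallest Chamfer distance to query."""
    dists = [chamfer_distance(query, m) for m in library]
    return int(np.argmin(dists))
```

Because the Chamfer distance shrinks as two clouds grow more similar, picking the nearest library model keeps the replacement perturbation small.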
Result The experiment is conducted on the ModelNet40 dataset, which has 40 categories of different objects. PointNet and PointNet++, the most popular point cloud classification models, are used as victim networks. Our method is compared with classical white-box attack algorithms, and our attack is also validated against several classic defense algorithms. On PointNet, compared with FGSM, the attack success rate is increased by 38.6%; compared with I-FGSM, it is increased by 7.3%; and compared with the Jacobian-based saliency map attack (JSMA), it is increased by 41%. Under the restriction of perturbing 100 points, the network accuracy is reduced to 6.2%. When the random point drop defense is attacked, a success rate of 97.9% can still be achieved; when the outlier removal defense is attacked, a success rate of 98.6% can be achieved. On PointNet++, compared with FGSM and JSMA, the attack success rate is increased by 58.6% and 85.3%, respectively. Under the restriction of perturbing 100 points, the network accuracy is reduced to 12.8%. When the random point drop defense is attacked, a success rate of 94.6% can still be achieved; when the outlier removal defense is attacked, our method still achieves a success rate of 95.6%. Experiments on the influence of the number of perturbed points on the network are also conducted: when 25, 50, 75, and 100 points are perturbed, the accuracy of PointNet is decreased to 33.5%, 21.7%, 16.5%, and 13.5%, respectively, and the accuracy of PointNet++ is decreased to 16.3%, 14.7%, 13.2%, and 12.8%, respectively. Conclusion The attack algorithm proposed in this paper considers both the efficiency of the attack and the imperceptibility of the adversarial samples. The proposed method can efficiently attack mainstream point cloud deep neural networks and achieves better performance, and it still succeeds easily even against several simple defense algorithms.
Keywords
