Point cloud replacement adversarial attack based on saliency map
2022, Vol. 27, No. 2, Pages: 500-510
Received: 2021-07-05; Revised: 2021-09-14; Accepted: 2021-09-21; Published in print: 2022-02-16
DOI: 10.11834/jig.210546
Objective
Research on adversarial attacks has traditionally focused on the 2D image domain. Modifying a 3D object directly affects its 3D characteristics, so generating imperceptible perturbations is very difficult, and adversarial attacks on 3D point cloud data have consequently received little attention. Deep neural networks for point cloud tasks such as object classification and segmentation are vulnerable to point cloud adversarial examples, which cause the networks to make wrong predictions. We therefore propose a point cloud replacement adversarial attack based on a saliency map.
Method
Existing point cloud classification networks typically depend on the critical points of a point cloud model. The proposed method computes each point's saliency value from the effect of moving that point to the point cloud center, builds a point cloud saliency map from these values, and selects the sampled points with the highest saliency values as the critical point set, ensuring a larger impact on the network's classification result. Chamfer distance is used to measure the difference between point cloud models, and the critical point set of the model in the point cloud library with the smallest Chamfer distance is chosen for replacement, which minimizes the perturbation and makes it hard for the human eye to perceive.
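To make the saliency scoring concrete, the following is a minimal Python sketch, assuming a scalar classification loss loss_fn exposed by the victim network; the function names and the brute-force center-shift estimate are illustrative, not the paper's exact implementation.

import numpy as np

def saliency_by_center_shift(points, loss_fn):
    """Score every point of an (N, 3) cloud by the loss increase
    observed when that point is collapsed onto the cloud centroid,
    which approximates removing it from the classifier's view."""
    center = points.mean(axis=0)
    base_loss = loss_fn(points)
    scores = np.empty(len(points))
    for i in range(len(points)):
        shifted = points.copy()
        shifted[i] = center            # move point i to the center
        scores[i] = loss_fn(shifted) - base_loss
    return scores

def top_k_critical(scores, k=100):
    """Indices of the k points with the highest saliency scores."""
    return np.argsort(scores)[-k:]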
Result
Comparative experiments are conducted on the ModelNet40 dataset with the point cloud classification networks PointNet and PointNet++. On PointNet, the attack success rate of our method is 38.6%, 7.3%, and 41% higher than that of FGSM (fast gradient sign method), I-FGSM (iterative fast gradient sign method), and JSMA (Jacobian-based saliency map attack), respectively; when 100 sampled points are perturbed, our method reduces the network accuracy to 6.2%. On PointNet++, the attack success rate is 58.6% and 85.3% higher than that of FGSM and JSMA, respectively; when 100 sampled points are perturbed, our method reduces the network accuracy to 12.8%.
Conclusion
The proposed point cloud adversarial attack considers both the efficiency of the attack and the imperceptibility of the adversarial examples, and can efficiently attack mainstream point cloud deep neural networks.
Objective
Deep learning networks are vulnerable to attacks from well-crafted adversarial samples, which cause neural networks to produce erroneous results. However, current research on adversarial attacks has focused mostly on 2D images and convolutional neural networks (CNNs), so research on 3D data such as point clouds is minimal. In recent years, deep learning has achieved great success in applications involving 3D data. Considering the many safety-critical applications in 3D object classification, such as automatic driving, studying how point cloud adversarial samples affect current 3D deep learning networks is very important. Recently, researchers have made great progress on tasks such as object classification and instance segmentation using deep neural networks on point clouds, with PointNet and PointNet++ as the classical representatives. Robustness against attacks has been studied rigorously in 3D deep learning because security plays a vital role in deep learning systems. Many studies have shown that deep neural networks for processing 2D images are extremely weak against adversarial samples, and most defense methods have been defeated by adversarial attacks. For instance, the fast gradient sign method (FGSM) is a classical attack algorithm that successfully makes a neural network recognize a panda as a gibbon, whereas humans cannot distinguish the pictures before and after the attack. Subsequently, the iterative fast gradient sign method (I-FGSM) was proposed to improve FGSM, making attacks more successful and more difficult to defend against, and highlighting the challenge posed by adversarial attacks. An important concept was developed in PointNet: its authors showed that the network can correctly classify a point cloud using only a subset of its points, called the critical points, and that the strong robustness of PointNet depends on their existence. However, the theory of critical points is still inadequate; the concept is vague because it assigns no importance value to individual points or subsets. The point cloud saliency map solves this problem well because it estimates the importance of every single point. Once the importance of each point is computed, the k most important points can be perturbed to generate adversarial samples and attack the network.
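For reference, the two gradient-based baselines can be sketched as follows; this is a minimal numpy version that assumes a grad_fn returning the loss gradient with respect to the input, with eps and alpha chosen for illustration only.

import numpy as np

def fgsm(x, grad_fn, eps=0.02):
    """Single-step FGSM: shift every coordinate by eps along the
    sign of the loss gradient to increase the classification loss."""
    return x + eps * np.sign(grad_fn(x))

def i_fgsm(x, grad_fn, eps=0.02, alpha=0.005, steps=10):
    """Iterative FGSM: repeat small signed-gradient steps while
    clipping the total perturbation to an L-infinity ball of
    radius eps around the original input x."""
    adv = x.copy()
    for _ in range(steps):
        adv = adv + alpha * np.sign(grad_fn(adv))
        adv = np.clip(adv, x - eps, x + eps)
    return adv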
Method
Building on the analysis of critical points above, a point cloud saliency map is first built to enhance the effectiveness of attacks. In saliency map construction, the critical points are estimated iteratively to avoid dependencies between different points. After the saliency score of each point is estimated, the proposed algorithm perturbs the k points with the highest saliency scores. Specifically, the k points with the highest saliency scores are selected in the input point cloud and exchanged with the critical points of the library model that has the smallest Chamfer distance. Chamfer distance is often used to measure the difference between two point clouds directly: the smaller the difference between the point clouds, the smaller the Chamfer distance, that is, point clouds with a smaller Chamfer distance appear more similar. The proposed method not only limits the search space but also minimizes the disturbance of the point cloud. Therefore, the adversarial point cloud sample is imperceptible to human eyes.
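A sketch of the Chamfer distance used for model selection and of the final replacement step is given below, under the assumption that the critical-point indices of both clouds have already been obtained from the saliency map; library is a hypothetical list of reference point clouds, and the helper names are illustrative.

import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between (N, 3) and (M, 3) clouds:
    the mean squared distance from each point to its nearest
    neighbour in the other cloud, summed over both directions."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def nearest_model(cloud, library):
    """Pick the library model with the smallest Chamfer distance."""
    return min(library, key=lambda m: chamfer_distance(cloud, m))

def replace_critical_points(cloud, idx, donor, donor_idx):
    """Build the adversarial sample by swapping the victim cloud's
    critical points (rows idx) for the donor's (rows donor_idx)."""
    adv = cloud.copy()
    adv[idx] = donor[donor_idx]
    return adv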
Result
The experiments are conducted on the ModelNet40 dataset, which contains 40 categories of objects. PointNet and PointNet++, the most popular point cloud classification models, are used as victim networks. Our method is compared with classical white-box attack algorithms and validated against several classic defense algorithms. On PointNet, the attack success rate is 38.6% higher than that of FGSM, 7.3% higher than that of I-FGSM, and 41% higher than that of JSMA (Jacobian-based saliency map attack). Under a perturbation budget of 100 points, the network accuracy is reduced to 6.2%. Against the random point drop defense, a success rate of 97.9% is still achieved; against the outlier removal defense, a success rate of 98.6% is achieved. On PointNet++, the attack success rate is 58.6% higher than that of FGSM and 85.3% higher than that of JSMA. Under a perturbation budget of 100 points, the network accuracy is reduced to 12.8%. Against the random point drop defense, a success rate of 94.6% is still achieved; against the outlier removal defense, our method still achieves a success rate of 95.6%. Experiments on the influence of the number of perturbed points are also conducted: when 25, 50, 75, and 100 points are perturbed, the accuracy of PointNet drops to 33.5%, 21.7%, 16.5%, and 13.5%, and the accuracy of PointNet++ drops to 16.3%, 14.7%, 13.2%, and 12.8%, respectively.
Conclusion
The attack algorithm proposed in this paper considers the efficiency of the attack as well as the imperceptibility of the adversarial samples. The proposed method can attack mainstream point cloud deep neural networks efficiently and achieves better performance; the attack still succeeds easily even against several simple defense algorithms.
References
Akhtar N and Mian A. 2018. Threat of adversarial attacks on deep learning in computer vision: a survey. IEEE Access, 6: 14410-14430[DOI: 10.1109/ACCESS.2018.2807385]
Arnab A, Miksik O and Torr P H S. 2018. On the robustness of semantic segmentation models to adversarial attacks//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE: 888-897[DOI: 10.1109/CVPR.2018.00099]
Du J and Cai G R. 2021. Point cloud semantic segmentation method based on multi-feature fusion and residual optimization. Journal of Image and Graphics, 26(5): 1105-1116
Fan H, Su H and Guibas L J. 2017. A point set generation network for 3D object reconstruction from a single image//Proceedings of 2017 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 605-613[DOI: 10.1109/CVPR.2017.264]
Goodfellow I J, Shlens J and Szegedy C. 2015. Explaining and harnessing adversarial examples[EB/OL]. [2021-06-15]. https://arxiv.org/pdf/1412.6572v3.pdf
Inkawhich N, Wen W, Li H H and Chen Y R. 2019. Feature space perturbations yield more transferable adversarial examples//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 7059-7067[DOI: 10.1109/CVPR.2019.00723]
Kurakin A, Goodfellow I J and Bengio S. 2017. Adversarial examples in the physical world[EB/OL]. [2021-06-15]. https://arxiv.org/pdf/1607.02533.pdf
Li Y Y, Bu R, Sun M C, Wu W, Di X H and Chen B Q. 2018. PointCNN: convolution on X-transformed points//Proceedings of the 32nd International Conference on Neural Information Processing Systems. Montréal, Canada: Curran Associates Inc.: 828-838
Liu D, Yu R and Su H. 2019. Extending adversarial attacks and defenses to deep 3D point cloud classifiers//Proceedings of 2019 IEEE International Conference on Image Processing. Taipei, China: IEEE: 2279-2283[DOI: 10.1109/ICIP.2019.8803770]
Meng H Y, Gao L, Lai Y K and Manocha D. 2019. VV-Net: voxel VAE net with group convolutions for point cloud segmentation//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea (South): IEEE: 8499-8507[DOI: 10.1109/ICCV.2019.00859]
Miao Y W and Xiao C X. 2014. Geometric Processing and Shape Modeling of 3D Point-Sampled Models. Beijing: Science Press
Moosavi-Dezfooli S M, Fawzi A and Frossard P. 2016. DeepFool: a simple and accurate method to fool deep neural networks//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE: 2574-2582[DOI: 10.1109/CVPR.2016.282]
Papernot N, McDaniel P, Jha S, Fredrikson M, Celik Z B and Swami A. 2016. The limitations of deep learning in adversarial settings//Proceedings of 2016 IEEE European Symposium on Security and Privacy. Saarbrücken, Germany: IEEE: 372-387[DOI: 10.1109/EuroSP.2016.36]
Qi C R, Su H, Mo K and Guibas L J. 2017a. PointNet: deep learning on point sets for 3D classification and segmentation//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 652-660[DOI: 10.1109/CVPR.2017.16]
Qi C R, Li Y, Su H and Guibas L J. 2017b. PointNet++: deep hierarchical feature learning on point sets in a metric space[EB/OL]. [2021-06-15]. https://arxiv.org/pdf/1706.02413.pdf
Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I and Fergus R. 2014. Intriguing properties of neural networks[EB/OL]. [2021-06-15]. https://arxiv.org/pdf/1312.6199.pdf
Wang Y, Sun Y B, Liu Z W, Sarma S E, Bronstein M M and Solomon J M. 2019. Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics, 38(5): #146[DOI: 10.1145/3326362]
Wu Z R, Song S R, Khosla A, Yu F, Zhang L G, Tang X O and Xiao J X. 2015. 3D ShapeNets: a deep representation for volumetric shapes//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE: 1912-1920[DOI: 10.1109/CVPR.2015.7298801]
Xiang C, Qi C R and Li B. 2019. Generating 3D adversarial point clouds//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE: 9128-9136[DOI: 10.1109/CVPR.2019.00935]
Zhang X L, Fu C L and Zhao Y J. 2020. Extended pointwise convolution network model for point cloud classification and segmentation. Journal of Image and Graphics, 25(8): 1551-1557[DOI: 10.11834/jig.190508]
Zheng T H, Chen C Y, Yuan J S, Li B and Ren K. 2019. Point cloud saliency maps//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, Korea (South): IEEE: 1598-1606[DOI: 10.1109/ICCV.2019.00168]
Zhou M Y, Wu J, Liu Y P, Liu S C and Zhu C. 2020. DaST: data-free substitute training for adversarial attacks//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE: 231-240[DOI: 10.1109/CVPR42600.2020.00031]