
[DSN2023] On Adversarial Robustness of Point Cloud Semantic Segmentation

Title: On Adversarial Robustness of Point Cloud Semantic Segmentation

Authors: Jiacen Xu, Zhe Zhou, Boyuan Feng, Yufei Ding, and Zhou Li

Conference: The 53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), June 2023.

Paper: link

Code: https://github.com/C0ldstudy/PointSecGuard

Bibtex:

@INPROCEEDINGS {10202642,
author = {J. Xu and Z. Zhou and B. Feng and Y. Ding and Z. Li},
booktitle = {2023 53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)},
title = {On Adversarial Robustness of Point Cloud Semantic Segmentation},
year = {2023},
volume = {},
issn = {},
pages = {531-544},
abstract = {Recent research efforts on 3D point cloud semantic segmentation (PCSS) have achieved outstanding performance by adopting neural networks. However, the robustness of these complex models have not been systematically analyzed. Given that PCSS has been applied in many safety-critical applications like autonomous driving, it is important to fill this knowledge gap, especially, how these models are affected under adversarial samples. As such, we present a comparative study of PCSS robustness. First, we formally define the attacker's objective under performance degradation and object hiding. Then, we develop new attack by whether to bound the norm. We evaluate different attack options on two datasets and three PCSS models. We found all the models are vulnerable and attacking point color is more effective. With this study, we call the attention of the research community to develop new approaches to harden PCSS models.},
keywords = {},
doi = {10.1109/DSN58367.2023.00056},
url = {https://doi.ieeecomputersociety.org/10.1109/DSN58367.2023.00056},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
month = {jun}
}

Main Idea

We present a comparative study of PCSS robustness. First, we formally define the attacker's objectives for performance degradation and object hiding. Then, we develop new attacks that differ in whether the perturbation norm is bounded. We evaluate the different attack options on two datasets and three PCSS models, and we find that all the models are vulnerable and that attacking point color is more effective than attacking coordinates. With this study, we call on the research community to develop new approaches to harden PCSS models.
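As a rough illustration of the two attacker objectives, the sketch below phrases them as per-point segmentation losses that the attacker ascends; the tensor shapes and the cross-entropy formulation are assumptions for illustration, not the paper's exact definitions.

```python
import torch
import torch.nn.functional as F

def degradation_loss(logits, labels):
    # Performance degradation: the attacker ascends the ordinary
    # segmentation loss so that as many points as possible are mislabeled.
    # logits: (N, num_classes) per-point predictions, labels: (N,).
    return F.cross_entropy(logits, labels)

def hiding_loss(logits, labels, obj_mask, substitute_label):
    # Object hiding: relabel the target object's points (obj_mask: (N,) bool)
    # with a substitute class (e.g. the surrounding background), and reward
    # the attacker for making the model predict that class on those points.
    target = torch.full_like(labels[obj_mask], substitute_label)
    return -F.cross_entropy(logits[obj_mask], target)
```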

The following figure shows an example in which the PCSS model of a delivery robot is misled by perturbations in the environment.

The workflow of the attack is shown in the following figure.
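To make this workflow concrete, here is a minimal PGD-style sketch of the iterative attack under an L-infinity bound; the step size, bound, and iteration count are placeholders rather than the paper's settings, and the objective can be either of the loss sketches above (bound extra arguments with a lambda or `functools.partial`).

```python
import torch

def attack(model, points, labels, objective, eps=0.05, alpha=0.01, steps=40):
    # Iteratively perturb the per-point features (coordinates and/or colors)
    # to ascend `objective`. points: (N, C), labels: (N,).
    adv = points.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = objective(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                 # gradient ascent step
            adv = points + (adv - points).clamp(-eps, eps)  # L-inf projection;
            # dropping this projection gives the norm-unbounded variant
    return adv.detach()
```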

Key Insights

  1. We develop a holistic framework that enables various attack configurations against PCSS models, and we extend previous coordinate-based attacks to color-based ones (see the sketch after this list).
  2. We evaluate the different attack configurations against three types of PCSS models on both indoor and outdoor datasets.
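As a sketch of the coordinate-to-color extension mentioned in the first point, the attack can zero the gradient outside the channels it is allowed to perturb; the channel layout assumed here (xyz in columns 0-2, rgb in 3-5) is an illustrative convention, not necessarily the repo's.

```python
import torch

XYZ = slice(0, 3)  # assumed coordinate channels
RGB = slice(3, 6)  # assumed color channels

def channel_masked_step(points, grad, alpha, target="color"):
    # One attack step restricted to either the color or the coordinate channels.
    mask = torch.zeros_like(points)
    mask[:, RGB if target == "color" else XYZ] = 1.0
    adv = points + alpha * grad.sign() * mask
    if target == "color":
        adv[:, RGB] = adv[:, RGB].clamp(0.0, 1.0)  # keep colors in a valid range
    return adv
```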

Evaluation

  • Models: PointNet++, ResGCN-28, RandLA-Net
  • Datasets: S3DIS, Semantic3D

The comparison between the color-based and coordinate-based attacks:

Color-based attack on different models:

This post is licensed under CC BY 4.0 by the author.
