
Usenix Security 2022

In this blog post, I summarize accepted papers from Usenix Security 2022 that are related to my research interests. Usenix Security 2022 publishes three accepted-paper lists, covering the summer, fall, and winter submission cycles:

  • Summer: https://www.usenix.org/conference/usenixsecurity22/summer-accepted-papers
  • Fall: https://www.usenix.org/conference/usenixsecurity22/fall-accepted-papers
  • Winter: https://www.usenix.org/conference/usenixsecurity22/winter-accepted-papers

Interesting Paper List

I will keep summarizing the interesting papers here.

Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks

Paper Authors: Shawn Shan, Arjun Nitin Bhagoji, Haitao Zheng, Ben Y. Zhao

Main Idea

The paper argues that preemptive defenses against data poisoning can be defeated by adaptive attackers, and instead proposes a forensic traceback method that, given a misclassified sample at test time, identifies the poisoned training data responsible for the attack.
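
As a rough illustration only (a hedged toy sketch of the pruning idea, not the authors' actual algorithm), the traceback can be thought of as: cluster the training set, remove one cluster at a time, retrain, and flag clusters whose removal makes the misclassification disappear. The dataset, model, and all parameters below are synthetic assumptions.

```python
# Toy sketch of poisoning traceback via cluster pruning (illustration only,
# NOT the paper's algorithm). Synthetic data and parameters throughout.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Clean two-class data plus a tight batch of label-flipped "poison" points
# around a target sample, so the target gets misclassified as class 0.
X, y = make_blobs(n_samples=400, centers=[(-2, 0), (2, 0)],
                  cluster_std=0.8, random_state=0)
target = np.array([[1.5, 0.0]])                    # a genuine class-1 point
poison_X = target + 0.2 * rng.standard_normal((40, 2))
poison_y = np.zeros(40, dtype=int)                 # flipped labels
X_train = np.vstack([X, poison_X])
y_train = np.concatenate([y, poison_y])

model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("prediction with poison:", model.predict(target))  # expected: [0]

# Traceback: partition the training set, remove one cluster at a time,
# and report clusters whose removal flips the target back to class 1.
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X_train)
for c in range(8):
    keep = clusters != c
    pruned = KNeighborsClassifier(n_neighbors=5).fit(X_train[keep], y_train[keep])
    if pruned.predict(target)[0] == 1:
        removed = np.flatnonzero(~keep)
        frac_poison = (removed >= len(X)).mean()   # poison rows come last
        print(f"cluster {c} implicated; {frac_poison:.0%} of it is poison")
```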

Key insight

Experiments

Dos and Don’ts of Machine Learning in Computer Security

Paper Authors: Daniel Arp, Erwin Quiring, Feargus Pendlebury, Alexander Warnecke, Fabio Pierazzi, Christian Wressnegger, Lorenzo Cavallaro, Konrad Rieck

Main Idea

This paper lists ten common pitfalls in AI security papers, along with explanations and potential solutions, and illustrates their impact with concrete examples. A figure in the paper maps the ten pitfalls to their corresponding stages in the ML workflow.

Key insight

Pitfalls list

  • Data Collection and Labeling
    • Sampling Bias: The collected data are not representative enough to reflect the real data distribution.
    • Label Inaccuracy: The ground-truth labels are inaccurate or unstable.
  • System Design and Learning
    • Data Snooping: The model is trained on data that would not be available in practice, e.g., information from the test set or from the future (a temporal-split sketch follows this list).
    • Spurious Correlations: The model separates classes using shortcut patterns that are unrelated to the actual security problem.
    • Biased Parameters: Some parameters are tuned on the test set instead of being fixed at training time.
  • Performance Evaluation
    • Inappropriate Baselines: The method is evaluated against no baselines or only limited ones.
    • Inappropriate Measures: The performance measures do not account for the constraints of the application scenario.
    • Base Rate Fallacy: Class imbalance is ignored when interpreting the performance measures (a numeric sketch follows this list).
  • Deployment and Operation
    • Lab-Only Evaluation: The learning model is evaluated only in a lab environment, without discussing its practical limitations.
    • Inappropriate Threat Model: The robustness of the machine learning model against attacks such as poisoning and evasion is neglected.
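
To make the data-snooping pitfall concrete, here is a hedged, self-contained sketch with purely synthetic data and illustrative parameters (none of it comes from the paper): when the data distribution drifts over time, as malware typically does, a random train/test split quietly trains on "future" samples and overstates accuracy compared to an honest chronological split.

```python
# Hedged toy sketch of temporal data snooping: on drifting data, a random
# train/test split leaks "future" samples into training and overstates
# accuracy relative to a chronological split. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stream whose class-separating direction drifts over time.
n = 2000
t = np.linspace(0.0, np.pi, n)                    # "timestamps"
y = rng.integers(0, 2, n)
drift = np.stack([np.cos(t), np.sin(t)], axis=1)  # rotating class direction
X = (2 * y - 1)[:, None] * drift + 0.8 * rng.standard_normal((n, 2))

def accuracy(X_tr, y_tr, X_te, y_te):
    return LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)

# Random split: the training set contains samples from the "future".
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
print("random split:       ", accuracy(X_tr, y_tr, X_te, y_te))

# Chronological split: train strictly on the past, test on the future.
cut = int(0.75 * n)
print("chronological split:", accuracy(X[:cut], y[:cut], X[cut:], y[cut:]))
```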
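
To make the base-rate fallacy concrete, a few lines of arithmetic suffice (the detector rates and base rates below are assumed values, not numbers from the paper): a detector with an excellent TPR/FPR profile still produces mostly false alarms once the malicious class becomes rare.

```python
# Minimal sketch of the base-rate fallacy: a "99% accurate" detector
# cannot be interpreted without the class base rate. Numbers are assumed.
def precision(tpr: float, fpr: float, base_rate: float) -> float:
    """Bayes' rule: P(malicious | alarm) from detector rates and base rate."""
    tp = tpr * base_rate
    fp = fpr * (1.0 - base_rate)
    return tp / (tp + fp)

# A detector with a 99% true-positive rate and a 1% false-positive rate...
tpr, fpr = 0.99, 0.01
for base_rate in (0.5, 0.01, 0.001):
    p = precision(tpr, fpr, base_rate)
    print(f"base rate {base_rate:>6.1%}: P(malicious | alarm) = {p:.1%}")
# At a 0.1% base rate, ~91% of alarms are false despite the "99%" detector.
```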

Impact Analysis

The paper analyzes the impact of these pitfalls in four applications:

  • Mobile malware detection
  • Vulnerability discovery
  • Source code authorship attribution
  • Network intrusion detection


This post is licensed under CC BY 4.0 by the author.
