
NDSS and Oakland 2021 ML Paper Summary

In this post, I summarize some ML-related papers from NDSS and Oakland 2021. I omit papers that are too far from my research interests.

NDSS 2021

1. Practical Blind Membership Inference Attack via Differential Comparisons

Authors: Bo Hui, Yuchen Yang, Haolin Yuan, …, Neil Zhenqiang Gong, Yinzhi Cao (JHU, Duke)

Main Idea

In a classic membership inference attack, the attacker trains shadow models to imitate the target model’s behaviour and then trains a binary classifier to decide whether a query sample is a member or a nonmember of the training set. When the shadow model is not similar to the target model, however, this approach degrades.

BlindMI: By transforming samples, adding noise, and coarse selection, they generate a nonmember dataset. If the query data are similar to this nonmember dataset, they are judged not to belong to the target training set, and vice versa.

Key Insight

If we cannot obtain the target training dataset, we can use its complement instead and check whether a data point lies in the complement.

Experiments

They evaluate the attack with different kernel functions and compare the resulting performance.
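
The set-to-set distance behind the differential comparison is an MMD-style kernel distance, which is why the choice of kernel matters. Below is a minimal NumPy sketch of a Gaussian-kernel MMD and the move-one-sample comparison; the helper names and the Dirichlet toy data are my own illustration, not the authors’ code.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian kernel between the rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimator of the squared maximum mean discrepancy.
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean())

def differential_member(idx, suspects, nonmembers, sigma=1.0):
    # Move suspects[idx] into the nonmember set. A member pulls the two
    # sets toward each other, so a shrinking distance suggests "member".
    s = suspects[idx:idx + 1]
    rest = np.delete(suspects, idx, axis=0)
    before = mmd2(suspects, nonmembers, sigma)
    after = mmd2(rest, np.vstack([nonmembers, s]), sigma)
    return after < before

# Toy usage with random softmax-like outputs of the target model.
rng = np.random.default_rng(0)
suspects = rng.dirichlet(np.ones(10), size=20)    # outputs on queried samples
nonmembers = rng.dirichlet(np.ones(10), size=20)  # outputs on generated nonmembers
print(differential_member(0, suspects, nonmembers))
```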

2. FARE: Enabling Fine-grained Attack Categorization under Low-quality Labeled Data

Authors: Junjie Liang, …, Gang Wang, Xinyu Xing (PSU, UIUC)

Main Idea

  1. Use various unsupervised learning methods to cluster the entire dataset:
    1. K-means
    2. DBSCAN
    3. DEC
  2. Contrastive Learning: Use the fused labels to train an input transformation net
  3. Final clustering: perform clustering in the latent space (a minimal sketch of the pipeline follows below)
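
A minimal sketch of the three-step pipeline, assuming scikit-learn clusterers for the ensemble (DEC omitted for brevity) and a generic margin-based contrastive loss for the transformation net; the architecture, loss, and hyperparameters here are illustrative, not FARE’s exact design.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans, DBSCAN

# Toy data standing in for extracted malware / intrusion features.
X = np.random.default_rng(0).normal(size=(500, 32)).astype("float32")

# Step 1: ensemble of unsupervised clusterings over the whole dataset.
label_sets = [
    KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X),
    DBSCAN(eps=3.0, min_samples=5).fit_predict(X),
]

# Step 2: contrastive training of an input-transformation net. A pair is
# "positive" when every clustering in the ensemble puts it together.
net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 8))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.from_numpy(X)

for step in range(200):
    i, j = np.random.randint(0, len(X), size=2)
    agree = all(ls[i] == ls[j] for ls in label_sets)
    d = (net(x[i]) - net(x[j])).pow(2).sum()
    # Pull agreeing pairs together, push disagreeing pairs apart (margin 1).
    loss = d if agree else torch.clamp(1.0 - d.clamp_min(1e-12).sqrt(), min=0.0) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()

# Step 3: final clustering in the learned latent space.
with torch.no_grad():
    Z = net(x).numpy()
final_labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(Z)
```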

Key insight

  • Low-quality labels pose a crucial challenge to deploying supervised DNNs in security applications.
  • Contrastive learning with ensemble clustering enables fine-grained attack categorization.
  • FARE can serve as an effective tool for attack categorization in real-world security applications.

Experiments

  1. Datasets:
    1. Android Malware
    2. Network Intrusion (KDDCUP’99)
    3. Real-world application: fraudulent account identification
  2. Metrics: AMI and accuracy

3. FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping

Authors: Xiaoyu Cao, …, Neil Gong (Duke University, The Ohio State University)

Main Idea

The paper proposes a trust-bootstrapping mechanism that lets the server assign trust scores to clients, together with a new aggregation rule that defends federated learning against poisoning attacks.
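
A NumPy sketch of the aggregation rule as I read it: each client’s trust score is the ReLU-clipped cosine similarity between its update and the server’s update computed on a small clean root dataset, client updates are rescaled to the server update’s magnitude, and the global update is the trust-weighted average. Variable names and the zero-trust fallback are my own choices.

```python
import numpy as np

def fltrust_aggregate(client_updates, server_update):
    # server_update: model update computed on the server's clean root dataset.
    g0 = server_update
    norm_g0 = np.linalg.norm(g0)
    scores, scaled = [], []
    for g in client_updates:
        cos = g @ g0 / (np.linalg.norm(g) * norm_g0 + 1e-12)
        scores.append(max(cos, 0.0))  # ReLU-clipped cosine as the trust score
        # Normalize each client update to the server update's magnitude.
        scaled.append(g * (norm_g0 / (np.linalg.norm(g) + 1e-12)))
    scores = np.array(scores)
    if scores.sum() == 0:
        return g0  # nobody trusted: fall back to the server's own update
    return (scores[:, None] * np.stack(scaled)).sum(axis=0) / scores.sum()

# Toy usage: three benign clients near the server direction, one attacker
# that sends a large, flipped update.
rng = np.random.default_rng(1)
g0 = rng.normal(size=4)
updates = [g0 + 0.1 * rng.normal(size=4) for _ in range(3)] + [-10.0 * g0]
print(fltrust_aggregate(updates, g0))
```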

Key insight

  1. They design a new Byzantine-robust federated learning method that is robust against poisoning attacks.
  2. The server can enhance the security of federated learning by collecting a small, clean training dataset to bootstrap trust.

Experiments

  1. MNIST, 100 clients, 20 of them malicious

4. Data Poisoning Attacks to Deep Learning Based Recommender Systems

Authors: Hai Huang, …, Neil Gong, …, Mingwei Xu (Tsinghua University, Duke University, West Virginia University)

Main Idea

Data Poisoning Attacks:

  • Algorithm-agnostic
  • Algorithm-specific

Attacker’s goal: promote a target item in the recommender system.

Overview of the Attack:

  1. Approximate the hit ratio
  2. Construct the poison model
  3. Select filler items (a toy sketch follows below)
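
A toy sketch of the fake-profile construction, assuming a hypothetical surrogate (“poison”) model whose predicted scores guide filler-item selection; the real attack additionally trains that model against an approximated hit ratio, which this sketch glosses over.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_fillers, target_item = 1000, 20, 42

def surrogate_scores():
    # Stand-in for the poison model's predicted scores for one fake user.
    return rng.random(n_items)

def fake_user_profile():
    # Each fake user interacts with the target item plus filler items picked
    # from the surrogate model's top-scored items, so the profile looks like
    # a plausible user rather than an obvious bot.
    scores = surrogate_scores()
    scores[target_item] = -np.inf  # the target is added explicitly below
    fillers = np.argsort(scores)[-n_fillers:]
    return np.concatenate(([target_item], fillers))

# Inject 50 fake users' interactions into the training data.
poison_profiles = [fake_user_profile() for _ in range(50)]
print(poison_profiles[0])
```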

Key insight

Poisoning attacks on recommender systems borrow from classic adversarial attacks while adding domain-specific components. How to transfer adversarial attacks to other security applications will probably remain attractive to reviewers.

Experiments

  1. Datasets: MovieLens-100k, MovieLens-1M, Last.fm
  2. Target RS: NeuMF
  3. Baseline methods: random attack, bandwagon attack, MF attack.

Oakland 2021

1. Detecting AI Trojans Using Meta Neural Analysis

Authors: Xiaojun Xu et al. (UIUC)

Conference: S&P 2021

Main Idea

To detect trojaned models with state-of-the-art accuracy, the paper trains a meta-classifier whose training data are model features generated by jumbo learning and query tuning.

Highlights

  1. Proposing jumbo learning and query tuning.
  2. Outperforming other methods on both image and speech datasets.
  3. Defending against adaptive attacks (with some modifications to the original defense).

Key insight

To find the best data for training the trojan-detection meta-classifier, the paper introduces jumbo learning.

Jumbo Learning: They copy the target model’s structure to create shadow models and train them with different parameters on both clean and trojaned datasets. A sampling function varies the triggers’ transparency, size, and other settings of the trojaned datasets to improve the trojaned shadow models’ generality.

Query-Tuning: When training the meta-classifier, they use query tuning to find the most representative features of the whole model dataset for the meta-classifier. The optimization goal is:

\[\arg\min_{\theta, X}\sum^m_{i=1}\mathcal{L}\big(\mathrm{meta}(R_i(X);\theta),\ b_i\big)\]

where \(X=\{x_1, \cdots, x_k\}\) is the tuned query set, \(R_i(X)\) denotes the \(i\)-th shadow model’s outputs on \(X\) (its representative features), and \(b_i\) is the corresponding benign/trojaned label. \(\mathrm{meta}\) represents the meta-classifier with parameters \(\theta\).
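
A PyTorch sketch of this joint optimization over the query set \(X\) and the meta-classifier parameters \(\theta\); randomly initialized MLPs stand in for the jumbo-trained shadow models, so everything here is illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
in_dim, n_classes, k_queries, n_shadow = 16, 4, 8, 32

# Stand-in shadow models with benign/trojaned labels b_i; jumbo learning
# would instead train them on clean and trojaned datasets.
shadow = [nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, n_classes))
          for _ in range(n_shadow)]
for m in shadow:
    m.requires_grad_(False)  # shadow models are fixed during query tuning
labels = torch.tensor([i % 2 for i in range(n_shadow)], dtype=torch.float32)

queries = nn.Parameter(torch.randn(k_queries, in_dim))  # the tunable query set X
meta = nn.Linear(k_queries * n_classes, 1)              # the meta-classifier
opt = torch.optim.Adam([queries] + list(meta.parameters()), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    losses = []
    for R_i, b_i in zip(shadow, labels):
        rep = R_i(queries).flatten()  # R_i(X): shadow outputs on the queries
        losses.append(bce(meta(rep).squeeze(), b_i))
    loss = torch.stack(losses).mean()  # joint objective over theta and X
    opt.zero_grad()
    loss.backward()
    opt.step()
```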

Experiments

Compared to Neural Cleanse, DeepInspect, Activation Clustering, Spectral, STRIP, and SentiNet, Meta Neural Trojan Detection performs best. They also design an adaptive attack and a countermeasure to demonstrate the method’s robustness.
