
CCS 2022 Summary

Paper List:

  • 1D: Poisoning and Backdooring ML
    • Identifying a Training-Set Attack’s Target Using Renormalized Influence Estimation
    • ❓FenceSitter: Black-box, Content-Agnostic, and Synchronization-Free Enrollment-Phase Attacks on Speaker Recognition Systems
    • Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
    • LoneNeuron: a Highly-effective Feature-domain Neural Trojan using Invisible and Polymorphic Watermarks
  • 2G&3A: Inference Attacks to ML (MI attacks and their explanation)
    • Group Property Inference Attacks Against Graph Neural Networks
    • Are Attribute Inference Attacks Just Imputation?
    • Enhanced Membership Inference Attacks against Machine Learning Models
    • Membership Inference Attacks and Generalization: A Causal Perspective
  • 3F: ⭕️Federated Learning Security
    • Eluding Secure Aggregation in Federated Learning via Model Inconsistency
    • EIFFeL: Ensuring Integrity for Federated Learning
    • pMPL: A Robust Multi-Party Learning Framework with a Privileged Party
    • Private and Reliable Neural Network Inference
  • 4D: Attacks to ML
    • Physical Hijacking Attacks against Object Trackers
    • Feature Inference Attack on Shapley Values
    • When Evil Calls : Targeted Adversarial Voice over IP-Telephony Network
    • Order-Disorder: Imitation Adversarial Attacks for Black-box Neural Ranking Models
  • 4I&5B: Adversarial Examples in ML
    • Perception-Aware Attack: Creating Adversarial Music via Reverse-Engineering Human Perception
    • “Is your explanation stable?”: A Robustness Evaluation Framework for Feature Attribution
    • Post-breach Recovery: Protection against White-box Adversarial Examples for Leaked DNN Models
    • Harnessing Perceptual Adversarial Patches for Crowd Counting
  • 5G&6A&6G: Priv & Anon: Privacy Attacks in ML
    • SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders
    • Auditing Membership Leakages of Multi-Exit Networks
    • StolenEncoder: Stealing Pre-trained Encoders
    • Membership Inference Attacks by Exploiting Loss Trajectory
    • DPIS: an Enhanced Mechanism for Differentially Private SGD with Importance Sampling
    • NFGen: Automatic Non-linear Function Evaluation Code Generator for General-purpose MPC Platforms
    • On the Privacy Risks of Cell-Based NAS Architectures
    • LPGNet: Link Private Graph Networks for Node Classification
  • 7J&8C: Differential Privacy
    • Shifted Inverse: A General Mechanism for Monotonic Functions under User Differential Privacy
    • Frequency Estimation in the Shuffle Model with Almost a Single Message
    • Differentially Private Triangle and 4-Cycle Counting in the Shuffle Model
    • Widespread Underestimation of Sensitivity in Differentially Private Libraries and How to Fix It
  • 8H: Privacy in Graphs
    • Graph Unlearning
    • Finding MNEMON: Reviving Memories of Node Embeddings

Identifying a Training-Set Attack’s Target Using Renormalized Influence Estimation

The paper builds an influence estimator to quantify each training data point’s contribution to a model’s prediction. They test it on backdoor and poisoning attacks across data domains including text, vision, and speech.
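
As a reference point (not the paper’s renormalized estimator), a generic gradient-based influence score looks roughly like the sketch below; `model`, `loss_fn`, and the example tensors are placeholders.

```python
# Generic gradient-dot-product influence sketch (my simplification,
# NOT the paper's renormalized estimator).
import torch

def example_gradient(model, loss_fn, x, y):
    """Flattened gradient of the loss on a single example."""
    model.zero_grad()
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
    return torch.cat([g.flatten() for g in grads])

def influence_scores(model, loss_fn, train_examples, x_target, y_target):
    """Score each training example by the dot product of its loss gradient
    with the target example's gradient (larger = more influential)."""
    g_target = example_gradient(model, loss_fn, x_target, y_target)
    return [torch.dot(example_gradient(model, loss_fn, x, y), g_target).item()
            for x, y in train_examples]
```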

❇️Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets

Authors: Reza Shokri (NUS), Nicholas Carlini (Google). The paper presents a poisoning attack that amplifies the leakage of other users’ private data through membership inference, attribute inference, and data extraction.
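
As I understand the high-level recipe, the adversary injects mislabeled copies of a target record so that the model’s behavior on the correctly labeled record becomes a much stronger membership signal. A minimal sketch of that poisoning step (my own simplification; `train_set`, `num_classes`, and `n_copies` are assumed parameters, not the paper’s):

```python
# Replicate the target record with flipped labels so that the model's
# loss on the true-labeled target becomes a stronger membership signal.
import random

def poison_for_membership_inference(train_set, x_target, y_target, num_classes, n_copies=8):
    wrong_labels = [c for c in range(num_classes) if c != y_target]
    poisons = [(x_target, random.choice(wrong_labels)) for _ in range(n_copies)]
    return train_set + poisons
```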

LoneNeuron: a Highly-effective Feature-domain Neural Trojan using Invisible and Polymorphic Watermarks

The paper designs a new model-poisoning attack that revises both the model structure and data points to add invisible, sample-specific, and polymorphic pixel-domain watermarks.
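
For intuition only, a sample-specific, low-amplitude additive watermark trigger could be generated like the sketch below; this is a generic construction, not LoneNeuron’s actual feature-domain mechanism, and `key`/`epsilon` are assumed parameters.

```python
# Generic sketch: derive a pseudo-random pattern from the image content plus a
# secret key, then add it at low amplitude so it stays visually imperceptible.
import numpy as np

def add_invisible_watermark(image, key, epsilon=2.0):
    seed = abs(hash((image.tobytes(), key))) % (2**32)   # sample-specific seed
    rng = np.random.default_rng(seed)
    pattern = rng.uniform(-1.0, 1.0, size=image.shape)
    watermarked = image.astype(np.float64) + epsilon * pattern
    return np.clip(watermarked, 0, 255).astype(np.uint8)
```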

❇️Group Property Inference Attacks Against Graph Neural Networks

The paper summarizes previous methods and designs both a black-box and a white-box property inference attack. They use GCN, GraphSAGE, and GAT as target models on three datasets: Pokec, Facebook, and Pubmed. The baselines look pretty simple.

Privacy related?
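
For context, the usual black-box property inference recipe trains shadow models on graphs with and without the group property and fits a meta-classifier on their outputs; a minimal sketch follows (the shadow-training and feature-extraction steps are assumed to happen elsewhere):

```python
# Shadow-model + meta-classifier sketch of black-box property inference
# (a generic recipe, not necessarily the paper's exact attack).
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_meta_classifier(shadow_outputs, shadow_property_labels):
    """shadow_outputs: one feature vector per shadow model (e.g. sorted posteriors
    on a fixed query set); shadow_property_labels: 1 if that shadow model's
    training graph had the group property, else 0."""
    meta = LogisticRegression(max_iter=1000)
    meta.fit(np.asarray(shadow_outputs), np.asarray(shadow_property_labels))
    return meta

def infer_property(meta, target_outputs):
    """Probability that the target model's training graph has the property."""
    return meta.predict_proba(np.asarray(target_outputs).reshape(1, -1))[0, 1]
```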

Are Attribute Inference Attacks Just Imputation?

To understand attribute inference risks, the paper proposes a white-box attack that identifies neurons in a model that are most correlated with the sensitive value for a target attribute.
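
A rough sketch of that neuron-ranking idea (my simplification using plain Pearson correlation; the activation matrix is assumed to be collected from an auxiliary dataset):

```python
# Rank hidden neurons by how strongly their activations correlate with the
# sensitive attribute value; the top-ranked neurons are then used to read
# the attribute off. This is an illustrative simplification.
import numpy as np

def rank_neurons_by_correlation(activations, sensitive_values):
    """activations: (n_samples, n_neurons) hidden activations;
    sensitive_values: (n_samples,) binary sensitive attribute."""
    acts = np.asarray(activations, dtype=float)
    s = np.asarray(sensitive_values, dtype=float)
    corrs = []
    for j in range(acts.shape[1]):
        a = acts[:, j]
        if a.std() == 0 or s.std() == 0:
            corrs.append(0.0)
        else:
            corrs.append(abs(np.corrcoef(a, s)[0, 1]))
    return np.argsort(corrs)[::-1]  # neuron indices, most correlated first
```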

Enhanced Membership Inference Attacks against Machine Learning Models

Author: Reza Shokri (NUS). The paper proposes a hypothesis-testing framework that uses reference models to achieve better attack performance. They also provide explanations and a differential analysis of the attacks and of what causes certain data points to be vulnerable. The paper is quite theoretical.
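
As a simplified illustration of the reference-model idea (not the paper’s exact test statistic), one can score membership by how anomalously small the target model’s loss on a point is compared with losses from reference models trained without it:

```python
# Reference-model membership test sketch: a z-score style statistic over
# losses from reference models that did not train on the candidate point.
import numpy as np

def membership_score(target_loss, reference_losses):
    """Large positive score => the target model's loss is unusually small
    relative to the reference distribution, suggesting the point is a member."""
    ref = np.asarray(reference_losses, dtype=float)
    mu, sigma = ref.mean(), ref.std() + 1e-12
    return (mu - target_loss) / sigma

def is_member(target_loss, reference_losses, threshold=1.0):
    # The threshold is an assumed illustrative value, not the paper's.
    return membership_score(target_loss, reference_losses) > threshold
```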

Membership Inference Attacks and Generalization: A Causal Perspective

Author: Prateek Saxena (NUS). The paper designs a causal graph to explain the influence of six attack variants.

This post is licensed under CC BY 4.0 by the author.
