Crowding Out The Noise: Algorithmic Collective Action Under Differential Privacy

📅 2025-05-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates how differential privacy (DP) impairs ordinary users' capacity to collectively influence AI model training through algorithmic collective action, e.g., coordinated data poisoning. It establishes a theoretical lower bound on the success probability of collective action under DP, exposing a fundamental tension between privacy protection and user collective agency. The authors simulate collective action during DPSGD training of deep neural network classifiers across multiple datasets. Results show that success probability drops as the privacy budget ε shrinks and as the collective gets smaller, and the derived lower bound closely tracks experimental outcomes on realistic image classification tasks. The key contributions are: (1) a formal characterization of DP's suppressive effect on algorithmic collective action; (2) the first empirically verified, theoretically grounded lower bound on collective success under DP; and (3) a principled foundation for reconciling strong privacy guarantees with meaningful user empowerment.
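To make the mechanism concrete, here is a minimal sketch of one DPSGD step (an illustration under our own assumptions, not the authors' implementation): each example's gradient is clipped to a fixed norm and Gaussian noise is added to the aggregate, which is exactly what attenuates the coordinated signal a collective plants in its data.

```python
# Minimal DPSGD step for logistic regression (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)

def dpsgd_step(w, X, y, clip_norm=1.0, noise_multiplier=1.0, lr=0.1):
    """One differentially private SGD step for logistic regression."""
    # Per-example gradients of the logistic loss: (p - y) * x.
    logits = X @ w
    probs = 1.0 / (1.0 + np.exp(-logits))
    per_example_grads = (probs - y)[:, None] * X   # shape (n, d)

    # Clip each example's gradient to L2 norm at most clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale

    # Sum, add Gaussian noise calibrated to the clipping norm, then average.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    noisy_mean_grad = (clipped.sum(axis=0) + noise) / len(X)
    return w - lr * noisy_mean_grad

# Toy usage: a larger noise_multiplier (i.e., a smaller epsilon) increasingly
# drowns out any coordinated perturbation the collective contributes.
X = rng.normal(size=(64, 5))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(5)
for _ in range(100):
    w = dpsgd_step(w, X, y, noise_multiplier=2.0)
```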

📝 Abstract
The integration of AI into daily life has generated considerable attention and excitement, while also raising concerns about automating algorithmic harms and re-entrenching existing social inequities. While the responsible deployment of trustworthy AI systems is a worthy goal, there are many possible ways to realize it, from policy and regulation to improved algorithm design and evaluation. In fact, since AI trains on social data, there is even a possibility for everyday users, citizens, or workers to directly steer its behavior through Algorithmic Collective Action, by deliberately modifying the data they share with a platform to drive its learning process in their favor. This paper considers how these grassroots efforts to influence AI interact with methods already used by AI firms and governments to improve model trustworthiness. In particular, we focus on the setting where the AI firm deploys a differentially private model, motivated by the growing regulatory focus on privacy and data protection. We investigate how the use of Differentially Private Stochastic Gradient Descent (DPSGD) affects the collective's ability to influence the learning process. Our findings show that while differential privacy contributes to the protection of individual data, it introduces challenges for effective algorithmic collective action. We characterize lower bounds on the success of algorithmic collective action under differential privacy as a function of the collective's size and the firm's privacy parameters, and verify these trends experimentally by simulating collective action during the training of deep neural network classifiers across several datasets.
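As a hedged illustration of the collective action setup described in the abstract, the sketch below shows a fraction alpha of users planting a fixed feature trigger and relabeling to a target class before sharing their data. The names (plant_signal, alpha, trigger_idx, target_label) are our placeholders, not the paper's notation.

```python
# Signal-planting collective action (illustrative sketch, our own naming).
import numpy as np

rng = np.random.default_rng(1)

def plant_signal(X, y, alpha=0.05, trigger_value=3.0, trigger_idx=0, target_label=1):
    """Return a copy of (X, y) where an alpha-fraction of rows carry the trigger."""
    X, y = X.copy(), y.copy()
    n_collective = int(alpha * len(X))
    members = rng.choice(len(X), size=n_collective, replace=False)
    X[members, trigger_idx] = trigger_value   # plant the feature trigger
    y[members] = target_label                 # relabel to the desired output
    return X, y, members

# Usage: the platform trains on (X_shared, y_shared); the collective hopes the
# model learns the trigger-to-target association despite DP noise.
X = rng.normal(size=(1000, 10))
y = rng.integers(0, 2, size=1000)
X_shared, y_shared, members = plant_signal(X, y, alpha=0.05)
```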
Problem

Research questions and friction points this paper is trying to address.

Investigates how differential privacy constrains algorithmic collective action
Analyzes how DPSGD training attenuates a collective's influence on the learned model
Characterizes lower bounds on collective success as a function of collective size and the firm's privacy parameters (a simple success estimator is sketched below)
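The success metric referenced in the last point can be estimated empirically along these lines (an assumption about the evaluation protocol, not the paper's exact procedure): measure the fraction of fresh trigger-bearing inputs that the trained model maps to the collective's target label, then sweep ε and the collective size to trace the trends the paper bounds.

```python
# Estimate the collective's success rate on held-out triggered inputs
# (hypothetical evaluation helper; predict_fn is any trained classifier).
import numpy as np

def collective_success_rate(predict_fn, X_test, trigger_value=3.0,
                            trigger_idx=0, target_label=1):
    """Fraction of triggered test inputs classified as the target label."""
    X_trig = X_test.copy()
    X_trig[:, trigger_idx] = trigger_value
    return float(np.mean(predict_fn(X_trig) == target_label))
```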
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frames Algorithmic Collective Action as a grassroots lever for steering AI model behavior
Studies collectives acting against a firm that trains with Differentially Private Stochastic Gradient Descent (DPSGD)
Derives lower bounds on collective success in terms of collective size and privacy parameters, verified experimentally
Rushabh Solanki
University of Waterloo, Vector Institute
Meghana Bhange
ÉTS Montréal, Mila
Ulrich Aivodji
ÉTS Montréal, Mila
Elliot Creager
University of Waterloo
Machine Learning