Your Privacy Depends on Others: Collusion Vulnerabilities in Individual Differential Privacy

📅 2026-01-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work reveals a collusion vulnerability in sampling-based individual differential privacy (iDP): although each user nominally controls their own privacy budget, their actual privacy risk depends on the budget choices of all other data contributors. The authors show empirically that certain distributions of privacy preferences inflate individuals' risk even when formal guarantees hold, and that colluding adversaries can deliberately choose budgets to amplify the membership inference susceptibility of targeted individuals, with successful attacks against 62% of targets, all while remaining entirely within the stated DP guarantees. As a mitigation, they propose an $(\varepsilon_i,\delta_i,\overline{\Delta})$-iDP privacy contract based on $\Delta$-divergences that gives users a hard upper bound on their excess vulnerability while preserving flexibility in mechanism design.

📝 Abstract
Individual Differential Privacy (iDP) promises users control over their privacy, but this promise can be broken in practice. We reveal a previously overlooked vulnerability in sampling-based iDP mechanisms: while conforming to the iDP guarantees, an individual's privacy risk is not solely governed by their own privacy budget, but critically depends on the privacy choices of all other data contributors. This creates a mismatch between the promise of individual privacy control and the reality of a system where risk is collectively determined. We demonstrate empirically that certain distributions of privacy preferences can unintentionally inflate the privacy risk of individuals, even when their formal guarantees are met. Moreover, this excess risk provides an exploitable attack vector. A central adversary or a set of colluding adversaries can deliberately choose privacy budgets to amplify vulnerabilities of targeted individuals. Most importantly, this attack operates entirely within the guarantees of DP, hiding this excess vulnerability. Our empirical evaluation demonstrates successful attacks against 62% of targeted individuals, substantially increasing their membership inference susceptibility. To mitigate this, we propose $(\varepsilon_i,\delta_i,\overline{\Delta})$-iDP, a privacy contract that uses $\Delta$-divergences to provide users with a hard upper bound on their excess vulnerability, while offering flexibility to mechanism design. Our findings expose a fundamental challenge to the current paradigm, demanding a re-evaluation of how iDP systems are designed, audited, communicated, and deployed to make excess risks transparent and controllable.
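To make the setting concrete, here is a minimal illustrative sketch (not the paper's exact mechanism) of how sampling-based iDP is commonly realized: each user's inclusion probability is derived from their individual budget via the standard amplification-by-subsampling bound, under which a $q$-subsampled $\varepsilon$-DP mechanism satisfies $\ln(1 + q(e^{\varepsilon} - 1))$-DP. The coupling the abstract describes appears if the base budget `eps_base` is calibrated to the budgets requested by *all* users (here, assumed to be their maximum), so other contributors' choices change a target's effective sampling rate.

```python
import math

def sampling_rate(eps_i: float, eps_base: float) -> float:
    """Inclusion probability q_i that grants an individual eps_i-DP
    guarantee when the base mechanism is eps_base-DP.
    Inverts the amplification-by-subsampling bound
    eps_i = ln(1 + q_i * (e^eps_base - 1))."""
    assert 0 < eps_i <= eps_base
    return (math.exp(eps_i) - 1) / (math.exp(eps_base) - 1)

# Hypothetical population: the target requests eps = 0.5.
target_eps = 0.5

# Scenario A: other users request moderate budgets.
budgets_a = [target_eps, 1.0, 1.0, 1.5]
# Scenario B: colluders all request a very large budget,
# pushing the calibrated base budget up.
budgets_b = [target_eps, 8.0, 8.0, 8.0]

# Assumed design choice: eps_base is set to the maximum requested budget.
q_target_a = sampling_rate(target_eps, max(budgets_a))
q_target_b = sampling_rate(target_eps, max(budgets_b))

# The target's formal eps stays 0.5 in both scenarios, yet their
# inclusion probability (and thus concrete exposure profile) shifts
# solely because of others' budget choices.
print(f"target q under scenario A: {q_target_a:.4f}")
print(f"target q under scenario B: {q_target_b:.6f}")
```

The `max(budgets)` calibration rule is an assumption for illustration; the paper's point is precisely that, under such coupled designs, the guarantee a user sees ($\varepsilon_i$) does not fully describe the risk they bear.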
Problem

Research questions and friction points this paper is trying to address.

Individual Differential Privacy
Collusion Attack
Privacy Risk
Membership Inference
Excess Vulnerability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Individual Differential Privacy
Collusion Attack
Privacy Budget
Membership Inference
Δ-Divergence
Johannes Kaiser
Chair for AI in Healthcare and Medicine, Technical University of Munich and TUM University Hospital, Munich, Germany
Alexander Ziller
Technische Universität München
Privacy-preserving Machine Learning · AI in Health · Computer Vision
Eleni Triantafillou
Google DeepMind
Machine Learning · Few-shot Learning · Meta-Learning
Daniel Rückert
Chair for AI in Healthcare and Medicine, Technical University of Munich and TUM University Hospital, Munich, Germany; Department of Computing, Imperial College London, UK
Georgios Kaissis
Chair for Human-centred Transformative AI, HPI, University of Potsdam, Germany