Inferring Communities of Interest in Collaborative Learning-based Recommender Systems

📅 2023-06-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Collaborative learning recommendation systems, such as federated learning and gossip learning, offer privacy advantages but have recently been shown to be vulnerable to community-level privacy attacks. This paper introduces the Community Inference Attack (CIA), a novel threat model in which an adversary infers communities of users sharing common interests over a target item set merely by comparing local model parameters, without training surrogate models. CIA establishes a low-overhead, model-agnostic paradigm for community-level inference, moving beyond conventional attacks that target individual models. Evaluated on three real-world datasets, CIA achieves attack accuracy up to ten times higher than random guessing. Furthermore, we propose "Share less", a selective parameter-sharing strategy that offers a better privacy-utility trade-off than differentially private SGD (DP-SGD), especially in gossip learning.
📝 Abstract
Collaborative-learning-based recommender systems, such as those employing Federated Learning (FL) and Gossip Learning (GL), allow users to train models while keeping their history of liked items on their devices. While these methods were seen as promising for enhancing privacy, recent research has shown that collaborative learning can be vulnerable to various privacy attacks. In this paper, we propose a novel attack called Community Inference Attack (CIA), which enables an adversary to identify community members based on a set of target items. What sets CIA apart is its efficiency: it operates at low computational cost by eliminating the need for training surrogate models. Instead, it uses a comparison-based approach, inferring sensitive information by comparing users' models rather than targeting any specific individual model. To evaluate the effectiveness of CIA, we conduct experiments on three real-world recommendation datasets using two recommendation models under both Federated and Gossip-like settings. The results demonstrate that CIA can be up to 10 times more accurate than random guessing. Additionally, we evaluate two mitigation strategies: Differentially Private Stochastic Gradient Descent (DP-SGD) and a Share less policy, which involves sharing fewer, less sensitive model parameters. Our findings suggest that the Share less strategy offers a better privacy-utility trade-off, especially in GL.
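The comparison-based idea in the abstract can be sketched in a few lines. The following is an illustrative toy, not the paper's exact algorithm: an adversarial participant ranks other users by the similarity of their shared model parameters on a set of target items, taking its own parameters as the reference. All names and the toy embedding tables are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two flat parameter vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def community_scores(user_params, target_items, reference_user):
    """Rank users by how closely their target-item parameters match a
    reference member's parameters (the adversary's own model). No
    surrogate model is trained; only shared parameters are compared."""
    ref = [x for i in target_items for x in user_params[reference_user][i]]
    scores = {}
    for user, params in user_params.items():
        if user == reference_user:
            continue
        vec = [x for i in target_items for x in params[i]]
        scores[user] = cosine(ref, vec)
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy per-user item-embedding tables (item index -> embedding row).
user_params = {
    "adversary": [[1.0, 0.9], [0.8, 1.0], [0.0, 0.1]],
    "alice":     [[0.9, 1.0], [0.7, 0.9], [0.2, 0.0]],   # similar interests
    "bob":       [[-0.8, 0.1], [0.1, -0.9], [1.0, 0.2]], # dissimilar
}
ranking = community_scores(user_params, target_items=[0, 1],
                           reference_user="adversary")
print(ranking[0][0])  # alice ranks as the most likely community member
```

The key point the toy captures is the attack's low cost: it needs only the parameters users already exchange during collaborative training, plus a pairwise comparison.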
Problem

Research questions and friction points this paper is trying to address.

Proposes Community Inference Attack on collaborative recommender systems
Identifies community members based on target items efficiently
Evaluates mitigation strategies for privacy-utility trade-off
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comparison-based approach without surrogate models
Identifies communities via model parameter comparison
Evaluates mitigation strategies (DP-SGD, Share less) for privacy protection
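The "Share less" mitigation can be illustrated with a small sketch. This is a hypothetical rendering of the policy described in the abstract, not the paper's implementation: before gossiping, a user shares only the parameters not tied to sensitive items, and a peer merges the partial update with its own model. The merge rule and all names are illustrative assumptions.

```python
def share_less(item_embeddings, sensitive_items):
    """Return a partial update that omits sensitive-item parameters,
    so an observer cannot compare models on those items."""
    return {item: row[:] for item, row in item_embeddings.items()
            if item not in sensitive_items}

def gossip_merge(local, received):
    """Naive gossip merge: average rows present in both models,
    otherwise keep whichever copy exists."""
    merged = {item: row[:] for item, row in local.items()}
    for item, row in received.items():
        if item in merged:
            merged[item] = [(a + b) / 2 for a, b in zip(merged[item], row)]
        else:
            merged[item] = row[:]
    return merged

# Toy item-embedding table (item index -> embedding row).
local = {0: [0.5, 0.5], 1: [1.0, 0.0], 2: [0.0, 1.0]}
shared = share_less(local, sensitive_items={1})  # item 1 stays on-device
print(sorted(shared))  # [0, 2]
peer = gossip_merge({0: [0.1, 0.3], 3: [0.2, 0.2]}, shared)
```

Unlike DP-SGD, which perturbs every gradient, this policy leaves the shared parameters exact and simply shares fewer of them, which is one way to read the paper's finding of a better privacy-utility trade-off in gossip learning.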
🔎 Similar Papers
No similar papers found.