Attribute Inference Attacks for Federated Regression Tasks

📅 2024-11-19
🏛️ AAAI Conference on Artificial Intelligence
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Attribute inference attacks (AIAs) against regression tasks in federated learning (FL) remain unexplored: existing AIAs are predominantly designed for classification and are ill-suited to reconstructing continuous sensitive attributes in regression settings. This paper introduces the first model-driven AIA framework tailored to FL-based regression. It proposes a gradient- and model-parameter-based inversion mechanism, integrates auxiliary public information, and devises a regression-oriented optimization strategy for sensitive attribute reconstruction, supporting both passive eavesdropping and active perturbation attack modes. Under client-level data heterogeneity, the method significantly improves reconstruction accuracy. Experiments on multiple real-world regression datasets demonstrate a 23.6%–41.2% reduction in reconstruction error over state-of-the-art baselines. This work establishes the first systematic benchmark for quantifying privacy risks in FL regression tasks.
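The gradient-based inversion idea can be illustrated with a minimal sketch. This is not the paper's actual algorithm; it assumes a linear regression model, squared loss, and an adversary who knows the global weights, the client's public features (auxiliary information), and an observed gradient, then searches for the sensitive attribute value whose induced gradient best matches the observation. All names and values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=3)          # global model weights (known to the adversary)
x_pub = np.array([0.4, -1.2])   # public features (auxiliary information)
s_true = 0.7                    # continuous sensitive attribute to infer
y = 1.5                         # regression target (assumed known or estimated)

def grad(w, x, y):
    """Gradient of the squared loss 0.5 * (w.x - y)^2 with respect to w."""
    x = np.asarray(x)
    return (w @ x - y) * x

# Gradient leaked during FL communication (passive eavesdropping scenario)
g_obs = grad(w, np.append(x_pub, s_true), y)

# Regression-style reconstruction: pick the candidate sensitive value
# whose induced gradient minimizes the gradient-matching error.
candidates = np.linspace(-2, 2, 4001)
errors = [np.linalg.norm(grad(w, np.append(x_pub, s), y) - g_obs)
          for s in candidates]
s_hat = candidates[int(np.argmin(errors))]
print(f"true s = {s_true:.3f}, reconstructed s = {s_hat:.3f}")
```

In practice the paper's framework targets full FL training with model-parameter updates and heterogeneous clients, where the reconstruction is posed as an optimization problem rather than the grid search used here for readability.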

📝 Abstract
Federated Learning (FL) enables multiple clients, such as mobile phones and IoT devices, to collaboratively train a global machine learning model while keeping their data localized. However, recent studies have revealed that the training phase of FL is vulnerable to reconstruction attacks, such as attribute inference attacks (AIA), where adversaries exploit exchanged messages and auxiliary public information to uncover sensitive attributes of targeted clients. While these attacks have been extensively studied in the context of classification tasks, their impact on regression tasks remains largely unexplored. In this paper, we address this gap by proposing novel model-based AIAs specifically designed for regression tasks in FL environments. Our approach considers scenarios where adversaries can either eavesdrop on exchanged messages or directly interfere with the training process. We benchmark our proposed attacks against state-of-the-art methods using real-world datasets. The results demonstrate a significant increase in reconstruction accuracy, particularly in heterogeneous client datasets, a common scenario in FL. The efficacy of our model-based AIAs makes them better candidates for empirically quantifying privacy leakage for federated regression tasks.
Problem

Research questions and friction points this paper is trying to address.

Study the vulnerability of federated regression tasks to attribute inference attacks
Propose novel model-based AIAs designed specifically for federated regression
Evaluate attack efficacy on heterogeneous client datasets in FL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel model-based AIAs tailored to FL regression tasks
Attacks that either eavesdrop on exchanged messages or actively interfere with training
Higher reconstruction accuracy on heterogeneous client datasets