CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling

📅 2025-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address privacy leakage from gradient uploads in federated learning, this paper proposes a gradient protection method based on orthogonal subspace Bayesian perturbation. The method projects gradients onto a subspace orthogonal to the original gradient, applies Bayesian sampling from a cold posterior distribution (a novel integration into subspace perturbation frameworks), and decouples the perturbation from the model update, enabling fine-grained privacy-utility trade-offs in high-dimensional parameter spaces. Experiments on three benchmark datasets show that the approach robustly resists multiple state-of-the-art gradient inversion attacks: the average PSNR of reconstructed images degrades by over 15 dB, while model accuracy drops by less than 1.2%, striking a favorable balance between strong privacy guarantees and high model utility.
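The core idea described above can be sketched in a few lines: perturb the gradient only within the subspace orthogonal to it, then select among perturbed candidates using a cold (temperature-scaled) posterior weighting. The sketch below is illustrative only, assuming a flat NumPy gradient and a user-supplied loss oracle; the function names (`orthogonal_perturbation`, `censor_style_update`) and the scoring details are assumptions, not the paper's actual implementation.

```python
import numpy as np

def orthogonal_perturbation(grad, scale=1.0, rng=None):
    """Sample noise, then remove its component along the gradient,
    so the returned perturbation lies in the subspace orthogonal to grad."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(size=grad.shape)
    unit = grad / (np.linalg.norm(grad) + 1e-12)
    noise -= np.dot(noise, unit) * unit  # project out the grad direction
    return scale * noise

def censor_style_update(grad, loss_fn, temperature=0.1, n_samples=8,
                        scale=1.0, rng=None):
    """Draw several orthogonally perturbed candidates and sample one
    according to cold-posterior weights exp(-loss / T); a small T
    concentrates mass on low-loss (utility-preserving) candidates."""
    rng = rng or np.random.default_rng(0)
    candidates = [grad + orthogonal_perturbation(grad, scale, rng)
                  for _ in range(n_samples)]
    losses = np.array([loss_fn(c) for c in candidates])
    weights = np.exp(-(losses - losses.min()) / temperature)
    idx = rng.choice(n_samples, p=weights / weights.sum())
    return candidates[idx]
```

Because the perturbation is orthogonal to the true gradient, its inner product with the original update is (numerically) zero, which is the property that lets large perturbations degrade inversion attacks while limiting the effect on the descent direction.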

📝 Abstract
Federated learning collaboratively trains a neural network on a global server, where each local client receives the current global model weights and sends back parameter updates (gradients) computed on its local private data. Sending these model updates may leak a client's private data. Existing gradient inversion attacks can exploit this vulnerability to recover private training instances from a client's gradient vectors. Recently, researchers have proposed advanced gradient inversion techniques that existing defenses struggle to handle effectively. In this work, we present a novel defense tailored to large neural network models. Our defense capitalizes on the high dimensionality of the model parameters to perturb gradients within a subspace orthogonal to the original gradient. By leveraging cold posteriors over orthogonal subspaces, our defense implements a refined gradient update mechanism that selects an optimal gradient, one that not only safeguards against gradient inversion attacks but also maintains model utility. We conduct comprehensive experiments across three datasets and evaluate our defense against various state-of-the-art attacks and defenses. Code is available at https://censor-gradient.github.io.
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Information Leakage Prevention
Large Model Training
Innovation

Methods, ideas, or system contributions that make the work stand out.

CENSOR
Privacy Preservation
Federated Learning Defense