🤖 AI Summary
In federated learning, eavesdroppers can infer sensitive information from shared model updates, posing a serious privacy threat.
Method: This paper proposes a controllable offset mechanism that formulates model training within a perturbed parameter estimation framework. It introduces a carefully designed, safely shareable offset vector that drives the server-side Fisher information matrix toward singularity—thereby substantially degrading parameter estimability—while preserving global model convergence. The mechanism further incorporates tamper detection to ensure update integrity.
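The core mechanics can be sketched roughly as follows. This is an illustrative toy, not the paper's actual protocol: the function names, the offset distribution, and the plain SGD step are all assumptions. Each client adds a secret offset to its update before transmission, so an eavesdropper only observes shifted updates, while the server, which receives the offsets over a secure channel, subtracts them and aggregates exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

def client_update(grad, offset, lr=0.1):
    """Compute the local SGD update, then add the secret offset before
    transmission; the eavesdropper only ever sees the shifted update."""
    return -lr * grad + offset

def server_aggregate(shifted_updates, offsets):
    """Remove the securely shared offsets and average, recovering the
    exact unshifted aggregate, so global accuracy is unaffected."""
    return np.mean([u - o for u, o in zip(shifted_updates, offsets)], axis=0)

# Toy round with two clients and a 4-dimensional model.
d = 4
grads = [rng.normal(size=d) for _ in range(2)]
offsets = [rng.normal(scale=10.0, size=d) for _ in range(2)]  # large shifts obfuscate
sent = [client_update(g, o) for g, o in zip(grads, offsets)]
agg = server_aggregate(sent, offsets)
expected = np.mean([-0.1 * g for g in grads], axis=0)
assert np.allclose(agg, expected)  # aggregation is exact despite the shifts
```

Because the offsets cancel exactly at the server, privacy comes at no cost in model accuracy; the trade-off is the bandwidth of the secure channel carrying the offsets.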
Results: Theoretical analysis and empirical evaluation demonstrate that, compared to standard noise-injection baselines, the proposed approach achieves stronger model parameter obfuscation with significantly lower communication overhead, effectively balancing privacy preservation, model accuracy, and communication efficiency.
📝 Abstract
In this paper, shifts are introduced to preserve model privacy against an eavesdropper in federated learning. Model learning is treated as a parameter estimation problem. This perspective allows us to derive the Fisher information matrix of the model updates from the shifted updates and drive it to singularity, thus posing a hard estimation problem for Eve. The shifts are securely shared with the central server to maintain model accuracy at the server and the participating devices. A convergence test is proposed to detect whether model updates have been tampered with, and we show that our scheme passes this test. Numerical results show that our scheme achieves a larger model shift than a noise-injection scheme while requiring a secret channel of lower bandwidth.
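The singularity argument can be illustrated with a scalar toy model (this is not the paper's derivation, just a hedged sketch). If Eve observes y ~ N(θ + o, σ²), where θ is the model parameter and o is the unknown offset, the two parameters enter the likelihood only through their sum, so the joint Fisher information matrix in (θ, o) is rank-deficient and θ is unidentifiable from y alone:

```python
import numpy as np

# Joint Fisher information matrix for the mean parameters (theta, o)
# of y ~ N(theta + o, sigma^2): each score component is (y - theta - o)/sigma^2,
# giving the rank-1 matrix (1/sigma^2) * [[1, 1], [1, 1]].
sigma2 = 1.0
J = (1.0 / sigma2) * np.array([[1.0, 1.0],
                               [1.0, 1.0]])

assert np.linalg.matrix_rank(J) == 1   # singular: theta not identifiable
assert np.isclose(np.linalg.det(J), 0.0)
```

In this degenerate case the Cramér-Rao bound on Eve's estimate of θ alone is infinite, which is the sense in which driving the Fisher information matrix toward singularity poses a hard estimation problem.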