ModShift: Model Privacy via Designed Shifts

📅 2025-07-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning, eavesdroppers can infer sensitive information from shared model updates, posing a serious privacy threat. Method: This paper proposes a controllable offset mechanism that formulates model training within a perturbed parameter estimation framework. It introduces a carefully designed, safely shareable offset vector that drives the server-side Fisher information matrix toward singularity—thereby substantially degrading parameter estimability—while preserving global model convergence. The mechanism further incorporates tamper detection to ensure update integrity. Results: Theoretical analysis and empirical evaluation demonstrate that, compared to standard noise-injection baselines, the proposed approach achieves stronger model parameter obfuscation with significantly lower communication overhead, effectively balancing privacy preservation, model accuracy, and communication efficiency.

📝 Abstract
In this paper, shifts are introduced to preserve model privacy against an eavesdropper in federated learning. Model learning is treated as a parameter estimation problem. This perspective allows us to derive the Fisher Information matrix of the model updates from the shifted updates and drive it to singularity, thus posing a hard estimation problem for Eve. The shifts are securely shared with the central server so that model accuracy is maintained at the server and at the participating devices. A convergence test is proposed to detect whether model updates have been tampered with, and we show that our scheme passes this test. Numerical results show that our scheme achieves a larger model shift than a noise-injection scheme while requiring a secret channel of lower bandwidth.
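The mechanism described above can be illustrated with a toy sketch. Everything here is an illustrative assumption, not the paper's actual construction: the shift values, the model dimension, and the rank-1 observation matrix are made up to show the two key ideas, namely that a secretly shared shift can be removed exactly by the server (so accuracy is preserved), and that a rank-deficient Fisher information matrix makes the eavesdropper's estimation problem ill-posed.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy model dimension (assumption)

# Device side: the true local update for this round
true_update = rng.normal(size=d)

# Designed shift, agreed with the server over a low-bandwidth secret channel
shift = 5.0 * rng.normal(size=d)

# The eavesdropper observes only the shifted update on the open channel
observed = true_update + shift

# Server side: knowing the shift, the server recovers the update exactly,
# so global model convergence is unaffected
recovered = observed - shift
assert np.allclose(recovered, true_update)

# Toy illustration of a singular Fisher information matrix: for a
# unit-noise Gaussian observation model y = A x + n, the FIM is J = A^T A.
# A rank-deficient A makes J singular, so x is not identifiable from y.
A = np.outer(np.ones(d), np.ones(d))  # rank-1 observation matrix (assumption)
fim = A.T @ A
rank = np.linalg.matrix_rank(fim)
print(rank)  # 1 < d: the estimation problem is ill-posed for Eve
```

Note that the secret channel only needs to carry the shift vector (or a seed generating it), which is where the bandwidth advantage over per-round noise coordination comes from.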
Problem

Research questions and friction points this paper is trying to address.

Preserve model privacy against eavesdroppers in federated learning
Make parameter estimation from shared updates intractable (singular Fisher Information matrix)
Detect tampered model updates via a convergence test
Innovation

Methods, ideas, or system contributions that make the work stand out.

Designed, safely shareable shifts preserve model privacy in federated learning
Shifted updates drive the Fisher Information matrix to singularity
Secure shift sharing with the server maintains model accuracy