Non-Linear Trajectory Modeling for Multi-Step Gradient Inversion Attacks in Federated Learning

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning, gradient inversion attacks (GIAs) face critical challenges under FedAvg’s multi-step aggregation: attackers observe only aggregated gradients, and existing linear surrogate models (e.g., SME) fail to capture the strong nonlinearity of SGD parameter trajectories. This work proposes NL-SME, a nonlinear surrogate model extension that introduces learnable quadratic Bézier curves to explicitly model SGD trajectory curvature via control points. Coupled with dvec scaling and regularization, NL-SME enhances reconstruction expressivity and stability. Evaluated on CIFAR-100 and FEMNIST, NL-SME significantly outperforms baselines—reducing cosine similarity loss by an order of magnitude—while maintaining computational efficiency. By breaking the restrictive linearity assumption, NL-SME establishes a more accurate and expressive attack paradigm for privacy risk assessment in multi-step federated learning.

📝 Abstract
Federated Learning (FL) preserves privacy by keeping raw data local, yet Gradient Inversion Attacks (GIAs) pose significant threats. In FedAvg multi-step scenarios, attackers observe only aggregated gradients, making data reconstruction challenging. Existing surrogate model methods such as SME assume linear parameter trajectories, but we demonstrate that this severely underestimates SGD's nonlinear complexity, fundamentally limiting attack effectiveness. We propose the Non-Linear Surrogate Model Extension (NL-SME), the first method to introduce nonlinear parametric trajectory modeling for GIAs. Our approach replaces linear interpolation with learnable quadratic Bézier curves that capture SGD's curved characteristics through control points, combined with regularization and dvec scaling mechanisms for enhanced expressiveness. Extensive experiments on the CIFAR-100 and FEMNIST datasets show that NL-SME significantly outperforms baselines across all metrics, achieving order-of-magnitude improvements in cosine similarity loss while maintaining computational efficiency. This work exposes heightened privacy vulnerabilities in FL's multi-step update paradigm and offers novel perspectives for developing robust defense strategies.
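The abstract's core idea, replacing linear interpolation between the parameters before and after local training with a quadratic Bézier curve, can be sketched as follows. This is a minimal illustration of the general Bézier parameterization, not the paper's actual implementation; the function and variable names (`bezier_point`, `control`) are hypothetical, and the learnable control point is shown as a fixed vector for simplicity.

```python
import numpy as np

def bezier_point(theta0, theta1, control, t):
    """Quadratic Bezier curve between two parameter vectors.

    theta0, theta1: trajectory endpoints (parameters before/after local training)
    control:        control point that bends the path away from the straight line
    t:              position along the curve in [0, 1]
    """
    return (1 - t) ** 2 * theta0 + 2 * (1 - t) * t * control + t ** 2 * theta1

# Toy 4-dimensional "parameter" vectors (illustrative only).
theta0 = np.zeros(4)        # parameters before local training
theta1 = np.ones(4)         # parameters after local training
control = np.full(4, 0.8)   # off-midline control point -> curved trajectory

mid = bezier_point(theta0, theta1, control, 0.5)
# Linear interpolation would give 0.5 at t = 0.5; the control point shifts it.
```

Setting `control` to the midpoint `(theta0 + theta1) / 2` recovers plain linear interpolation, which is exactly the restrictive special case the paper argues SME is limited to.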
Problem

Research questions and friction points this paper is trying to address.

Modeling nonlinear SGD trajectories to improve gradient inversion attack accuracy
Overcoming limitations of linear surrogate models in federated learning attacks
Enhancing data reconstruction from aggregated gradients in multi-step FL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses learnable quadratic Bézier curves for modeling
Introduces regularization and scaling mechanisms
Models nonlinear SGD trajectories in gradient inversion
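The paper reports order-of-magnitude improvements in cosine similarity loss, the standard gradient-matching objective in this line of work. A minimal sketch of that loss, assuming the common `1 - cos` formulation between a surrogate gradient and the observed aggregated gradient (the exact objective and any regularization terms in NL-SME may differ):

```python
import numpy as np

def cosine_similarity_loss(g_surrogate, g_observed):
    """1 - cosine similarity between two gradient vectors.

    Returns 0.0 when the gradients are perfectly aligned and
    2.0 when they point in exactly opposite directions.
    """
    num = np.dot(g_surrogate, g_observed)
    denom = np.linalg.norm(g_surrogate) * np.linalg.norm(g_observed)
    return 1.0 - num / denom
```

In the attack loop, this loss would be minimized over the candidate inputs (and, in NL-SME, over the Bézier control points) so that the surrogate trajectory reproduces the aggregated update the server observed.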
Authors

Li Xia — Key Laboratory of Ethnic Language Intelligent Analysis and Security Governance of MOE, Minzu University of China
Zheng Liu — Key Laboratory of Ethnic Language Intelligent Analysis and Security Governance of MOE, Minzu University of China
Sili Huang — JiLin University
Wei Tang — Key Laboratory of Ethnic Language Intelligent Analysis and Security Governance of MOE, Minzu University of China
Xuan Liu — Key Laboratory of Ethnic Language Intelligent Analysis and Security Governance of MOE, Minzu University of China; Hainan International College of Minzu University of China