Robust Federated Inference

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the insufficient robustness of federated inference—particularly its vulnerability to malicious attacks causing prediction inaccuracies—this paper formally defines the robust federated inference task and proposes a nonlinear aggregation modeling framework grounded in adversarial machine learning. Methodologically, it introduces a DeepSet-based joint training-inference defense mechanism that integrates adversarial training with test-time robust aggregation, enabling localized model deployment and privacy preservation. A key contribution is the unified security analysis of both linear and nonlinear aggregators, yielding provably robust aggregation optimization. Extensive evaluation on multiple benchmark datasets demonstrates that the proposed method improves accuracy by 4.7–22.2 percentage points over state-of-the-art robust aggregation schemes, significantly enhancing system resilience against adversarial attacks and overall reliability.

📝 Abstract
Federated inference, in the form of one-shot federated learning, edge ensembles, or federated ensembles, has emerged as an attractive solution for combining predictions from multiple models. This paradigm enables each model to remain local and proprietary while a central server queries them and aggregates their predictions. Yet the robustness of federated inference has been largely neglected, leaving these methods vulnerable to even simple attacks. To address this critical gap, we formalize the problem of robust federated inference and provide the first robustness analysis of this class of methods. Our analysis of averaging-based aggregators shows that the error of the aggregator is small either when the dissimilarity between honest responses is small or when the margin between the two most probable classes is large. Moving beyond linear averaging, we show that the problem of robust federated inference with non-linear aggregators can be cast as an adversarial machine learning problem. We then introduce an advanced technique using the DeepSet aggregation model, proposing a novel composition of adversarial training and test-time robust aggregation to robustify non-linear aggregators. Our composition yields significant improvements, surpassing existing robust aggregation methods by 4.7–22.2 percentage points in accuracy across diverse benchmarks.
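The abstract's point about averaging-based aggregators can be illustrated with a minimal sketch (hypothetical softmax values, not the paper's data): when the honest margin between the two most probable classes is modest, a single malicious response can flip a plain average, while a coordinate-wise robust aggregator such as the median resists it.

```python
import numpy as np

# Softmax outputs from three honest models for a 3-class problem
# (illustrative values only). The honest consensus is class 0.
honest = np.array([
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.7, 0.2, 0.1],
])
# One malicious model pushes all mass onto class 1.
malicious = np.array([[0.0, 1.0, 0.0]])
responses = np.vstack([honest, malicious])

# Plain averaging: the single attacker flips the prediction to class 1.
mean_pred = responses.mean(axis=0).argmax()

# Coordinate-wise median: the honest majority still yields class 0.
median_pred = np.median(responses, axis=0).argmax()
```

Here `mean_pred` is 1 while `median_pred` is 0; with a larger honest margin (e.g. honest outputs near one-hot on class 0), the average would survive the same attack, matching the abstract's margin condition.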
Problem

Research questions and friction points this paper is trying to address.

Analyzing robustness vulnerabilities in federated inference methods
Formalizing adversarial threats to non-linear aggregation techniques
Developing robust training and aggregation defenses against attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formalizes robust federated inference with first analysis
Casts non-linear aggregators as adversarial machine learning problem
Introduces adversarial training with robust DeepSet aggregation