Single-Round Scalable Analytic Federated Learning

📅 2025-12-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated learning (FL) faces two key challenges: high communication overhead and performance degradation under non-independent and identically distributed (non-IID) data. While Analytic Federated Learning (AFL) achieves single-round aggregation and distributional robustness, it is restricted to linear models; nonlinear extensions (e.g., DeepAFL) forfeit the single-round advantage. This paper proposes SAFLe, a framework enabling scalable nonlinear modeling under strict single-round communication constraints. Its core innovation lies in a bucketed feature representation coupled with sparse grouped embeddings, mathematically proven to be equivalent to high-dimensional linear regression. This equivalence preserves AFL's single-aggregation mechanism while endowing it with strong nonlinear expressivity. Experiments across multiple benchmarks demonstrate that SAFLe significantly outperforms linear AFL and multi-round DeepAFL in accuracy, while maintaining minimal communication cost and excellent scalability, establishing a new state-of-the-art for analytic federated learning.
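The bucketed-feature idea described above can be sketched in a few lines; this is a minimal illustration under assumed details (uniform buckets over a known range, one one-hot block per feature), with hypothetical function names not taken from the paper:

```python
import numpy as np

def bucketize(X, n_buckets, low=0.0, high=1.0):
    """Map each scalar feature to a bucket index in [0, n_buckets)."""
    edges = np.linspace(low, high, n_buckets + 1)[1:-1]
    return np.digitize(X, edges)

def grouped_one_hot(idx, n_buckets):
    """Sparse grouped embedding: one one-hot block per original feature."""
    n, d = idx.shape
    Z = np.zeros((n, d * n_buckets))
    Z[np.repeat(np.arange(n), d),
      (np.arange(d) * n_buckets + idx).ravel()] = 1.0
    return Z

# A non-linear (piecewise-constant) function of X becomes *linear* in Z,
# so it can be fit by ridge regression in closed form -- the equivalence
# that lets AFL's single-round aggregation carry over.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))
y = np.sin(6 * X).sum(axis=1)          # non-linear target
Z = grouped_one_hot(bucketize(X, n_buckets=8), n_buckets=8)
w = np.linalg.solve(Z.T @ Z + 1e-3 * np.eye(Z.shape[1]), Z.T @ y)
```

Each row of `Z` has exactly one active entry per feature group, so the embedding stays sparse while the effective model dimension grows with the bucket count.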

📝 Abstract
Federated Learning (FL) is plagued by two key challenges: high communication overhead and performance collapse on heterogeneous (non-IID) data. Analytic FL (AFL) provides a single-round, data-distribution-invariant solution, but is limited to linear models. Subsequent non-linear approaches, like DeepAFL, regain accuracy but sacrifice the single-round benefit. In this work, we break this trade-off. We propose SAFLe, a framework that achieves scalable non-linear expressivity by introducing a structured head of bucketed features and sparse, grouped embeddings. We prove this non-linear architecture is mathematically equivalent to a high-dimensional linear regression. This key equivalence allows SAFLe to be solved with AFL's single-shot, invariant aggregation law. Empirically, SAFLe establishes a new state-of-the-art for analytic FL, significantly outperforming both linear AFL and multi-round DeepAFL in accuracy across all benchmarks, demonstrating a highly efficient and scalable solution for federated vision.
Problem

Research questions and friction points this paper is trying to address.

How to achieve scalable non-linear expressivity within single-round federated learning
High communication overhead and performance collapse on heterogeneous (non-IID) data
Whether a non-linear architecture can be reduced exactly to high-dimensional linear regression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces structured head with bucketed features
Uses sparse grouped embeddings for non-linear expressivity
Proves mathematical equivalence to high-dimensional linear regression, preserving single-round aggregation
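Because the model is linear in the embedded features, AFL-style single-round aggregation can be sketched as below: each client uploads its Gram statistics exactly once, and the server's closed-form ridge solution matches the centralized fit no matter how the data is partitioned. This is a hedged sketch of the general analytic-FL mechanism; the function names and regularization choice are illustrative, not the paper's API:

```python
import numpy as np

def client_stats(Z, y):
    # Each client sends (Z^T Z, Z^T y) once -- the only communication round.
    return Z.T @ Z, Z.T @ y

def server_solve(stats, lam=1e-3):
    # Server sums the statistics and solves ridge regression in closed form.
    dim = stats[0][0].shape[0]
    A = sum(a for a, _ in stats) + lam * np.eye(dim)
    b = sum(b for _, b in stats)
    return np.linalg.solve(A, b)

rng = np.random.default_rng(1)
Z, y = rng.normal(size=(300, 10)), rng.normal(size=300)

# Deliberately non-IID split: rows sorted by target, then partitioned.
order = np.argsort(y)
parts = np.array_split(order, 3)
w_fed = server_solve([client_stats(Z[p], y[p]) for p in parts])

# Centralized solution with the same regularizer.
w_central = np.linalg.solve(Z.T @ Z + 1e-3 * np.eye(10), Z.T @ y)
assert np.allclose(w_fed, w_central)  # invariant to the data partition
```

The invariance follows from the sums telescoping: the per-client Gram matrices add up to the full `Z.T @ Z` regardless of how rows are distributed, which is why heterogeneity does not degrade the analytic solution.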
Alan T. L. Bacellar
University of Texas at Austin
Mustafa Munir
The University of Texas at Austin
Machine Learning · Computer Vision · Generative AI · Superconducting Electronics · Neurosymbolic AI
Felipe M. G. França
Professor of Computer Science and Engineering, Federal University of Rio de Janeiro
Artificial Intelligence · Computer Architecture · Distributed Algorithms
Priscila M. V. Lima
Federal University of Rio de Janeiro
R. Marculescu
University of Texas at Austin
Lizy K. John
University of Texas at Austin