Federated Attention: A Distributed Paradigm for Collaborative LLM Inference over Edge Networks

📅 2025-11-04
🤖 AI Summary
To address privacy leakage, high communication overhead, and computational bottlenecks in collaborative large language model (LLM) inference over edge networks, this paper proposes Federated Attention (FedAttn). FedAttn is the first method to establish a structural duality between federated optimization and contextual representation learning, systematically embedding federated learning principles into self-attention computation for multi-device collaborative inference. It jointly optimizes across Transformer layers via distributed key-value (KV) cache exchange and aggregation, sparse attention modeling, and adaptive periodic synchronization. Theoretical analysis characterizes how local computation bias and token heterogeneity affect error propagation. Experiments demonstrate that FedAttn significantly reduces communication volume and edge computational load compared to baseline methods, while preserving output quality—achieving strong scalability and practicality in real-world edge deployments.

📝 Abstract
Large language models (LLMs) are proliferating rapidly at the edge, delivering intelligent capabilities across diverse application scenarios. However, their practical deployment in collaborative scenarios confronts fundamental challenges: privacy vulnerabilities, communication overhead, and computational bottlenecks. To address these, we propose Federated Attention (FedAttn), which integrates the federated paradigm into the self-attention mechanism, creating a new distributed LLM inference framework that simultaneously achieves privacy protection, communication efficiency, and computational efficiency. FedAttn enables participants to perform local self-attention over their own token representations while periodically exchanging and aggregating Key-Value (KV) matrices across multiple Transformer blocks, collaboratively generating LLM responses without exposing private prompts. Further, we identify a structural duality between contextual representation refinement in FedAttn and parameter optimization in FL across private data, local computation, and global aggregation. This key insight provides a principled foundation for systematically porting federated optimization techniques to collaborative LLM inference. Building on this framework, we theoretically analyze how local self-attention computation within participants and heterogeneous token relevance among participants shape error propagation dynamics across Transformer blocks. Moreover, we characterize the fundamental trade-off between response quality and communication/computation efficiency, which is governed by the synchronization interval and the number of participants. Experimental results validate our theoretical analysis, and reveal significant optimization opportunities through sparse attention and adaptive KV aggregation, highlighting FedAttn's potential to deliver scalability and efficiency in real-world edge deployments.
Problem

Research questions and friction points this paper is trying to address.

Achieving privacy protection in collaborative LLM inference at edge networks
Reducing communication overhead during distributed LLM inference processes
Overcoming computational bottlenecks in edge-based collaborative LLM deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates federated paradigm into self-attention mechanism
Enables local self-attention with periodic KV aggregation
Systematically ports federated optimization to LLM inference
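The core mechanism — each participant runs self-attention over its own tokens, with KV matrices pooled across participants only at periodic synchronization rounds — can be illustrated with a toy numpy sketch. This is a conceptual illustration under assumed random projection weights, not the paper's actual algorithm; the function and parameter names (`fedattn_forward`, `sync_interval`) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention."""
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def fedattn_forward(local_tokens, n_blocks=6, sync_interval=2, rng=None):
    """Toy forward pass: each participant attends over its own tokens,
    but every `sync_interval` blocks the KV matrices are pooled across
    participants so attention spans the shared context.
    `local_tokens` is a list of (T_i, d) arrays, one per participant."""
    rng = rng or np.random.default_rng(0)
    d = local_tokens[0].shape[1]
    # Shared random projection weights per block (toy stand-in for a
    # pretrained Transformer's attention weights).
    weights = [tuple(rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
               for _ in range(n_blocks)]
    xs = [x.copy() for x in local_tokens]
    for b, (wq, wk, wv) in enumerate(weights):
        qkv = [(x @ wq, x @ wk, x @ wv) for x in xs]
        if (b + 1) % sync_interval == 0:
            # Synchronization round: exchange and pool KV across participants.
            k_all = np.vstack([k for _, k, _ in qkv])
            v_all = np.vstack([v for _, _, v in qkv])
            xs = [x + attention(q, k_all, v_all) for x, (q, _, _) in zip(xs, qkv)]
        else:
            # Local round: attend only over the participant's own tokens.
            xs = [x + attention(q, k, v) for x, (q, k, v) in zip(xs, qkv)]
    return xs

# Three participants, each holding 4 private tokens of dimension 8.
parts = [np.random.default_rng(i).standard_normal((4, 8)) for i in range(3)]
outs = fedattn_forward(parts)
print([o.shape for o in outs])
```

A larger `sync_interval` reduces KV-exchange communication at the cost of more locally biased context, which is the quality/efficiency trade-off the paper analyzes.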
Xiumei Deng
Pillar of Information Systems Technology and Design, Singapore University of Technology and Design, Singapore
Zehui Xiong
Professor, Queen's University Belfast
Edge Intelligence, Internet of Things, Wireless Networking, Blockchain, Metaverse
Binbin Chen
Pillar of Information Systems Technology and Design, Singapore University of Technology and Design, Singapore
Dong In Kim
Sungkyunkwan University (SKKU)
Wireless Communications, Internet of Things, Wireless Power Transfer, Connected Intelligence
M. Debbah
KU 6G Research Center, Department of Computer and Information Engineering, Khalifa University, Abu Dhabi, UAE; also with CentraleSupélec, University of Paris-Saclay, 91192 Gif-sur-Yvette, France
H. V. Poor
Department of Electrical and Computer Engineering, Princeton University, NJ 08544, USA