🤖 AI Summary
Deploying large language models (LLMs) in real-time systems is challenging due to high computational overhead, privacy risks, and the difficulty of simultaneously meeting personalized user needs. To address these issues, this work proposes Floe, a framework for collaborative inference between lightweight edge models and black-box cloud-based LLMs: personal data remains on-device while the system delivers low-latency, privacy-preserving personalized services. Floe introduces a heterogeneity-aware LoRA adaptation strategy and a logit-level fusion mechanism, enabling efficient deployment across diverse hardware. Experimental results show that Floe substantially improves edge inference speed and model performance, and outperforms existing baselines in both privacy protection and personalization.
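The summary names a heterogeneity-aware LoRA strategy without giving details. As a rough illustration of the general idea, the sketch below shows standard LoRA (a frozen base weight plus a trainable low-rank update) together with a purely hypothetical per-device rank choice; the shapes, scaling, and the `pick_rank` memory thresholds are assumptions, not the paper's method:

```python
import numpy as np

def lora_forward(x, W, A, B, scale=1.0):
    """Forward pass with a frozen base weight W plus a LoRA update B @ A.

    Shapes (assumed): x is (batch, d_in), W is (d_out, d_in),
    A is (rank, d_in), B is (d_out, rank). Only A and B would be trained.
    """
    return x @ W.T + scale * (x @ A.T) @ B.T

def pick_rank(mem_mb):
    """Hypothetical heterogeneity-aware rank selection by memory budget:
    smaller devices get a lower LoRA rank to cut compute and memory."""
    return 4 if mem_mb < 512 else 16
```

With `A` and `B` initialized so their product is zero (the usual LoRA initialization), `lora_forward` reduces to the frozen base model, and training only the low-rank factors keeps the on-device fine-tuning footprint small.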
📝 Abstract
Deploying large language models (LLMs) in real-time systems remains challenging due to their substantial computational demands and privacy concerns. We propose Floe, a hybrid federated learning framework designed for latency-sensitive, resource-constrained environments. Floe combines a cloud-based black-box LLM with lightweight small language models (SLMs) on edge devices to enable low-latency, privacy-preserving inference. Personal data and fine-tuning remain on-device, while the cloud LLM contributes general knowledge without exposing proprietary weights. A heterogeneity-aware LoRA adaptation strategy enables efficient edge deployment across diverse hardware, and a logit-level fusion mechanism enables real-time coordination between edge and cloud models. Extensive experiments demonstrate that Floe enhances user privacy and personalization. Moreover, it significantly improves model performance and reduces inference latency on edge devices under real-time constraints compared with baseline approaches.
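The abstract does not specify the fusion formula, so the following is only a minimal sketch of what logit-level coordination between an edge SLM and a cloud LLM could look like: each model's next-token logits are normalized, mixed with a weight, and decoded greedily. The mixing weight `alpha` and the greedy decoding step are assumptions for illustration:

```python
import numpy as np

def _log_softmax(logits):
    """Numerically stable log-softmax, so both models' scores share a scale."""
    x = np.asarray(logits, dtype=np.float64)
    x = x - x.max()
    return x - np.log(np.exp(x).sum())

def fuse_logits(edge_logits, cloud_logits, alpha=0.5):
    """Mix edge and cloud next-token distributions at the logit level.

    alpha is a hypothetical weight on the edge model (alpha=1 ignores
    the cloud); returns the greedily chosen token index.
    """
    fused = alpha * _log_softmax(edge_logits) + (1 - alpha) * _log_softmax(cloud_logits)
    return int(np.argmax(fused))
```

In this kind of scheme only per-step logits cross the edge-cloud boundary, which is consistent with keeping personal data on-device and the cloud LLM's weights proprietary.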