🤖 AI Summary
For privacy-sensitive next-location prediction (NxLP), this paper proposes FLLL3M, the first framework to integrate large language models (LLMs) into the federated learning paradigm for modeling user mobility. FLLL3M employs a lightweight outer-product parameter optimization mechanism, enabling on-device model training without uploading raw location data. It further incorporates distributed gradient aggregation and model compression to substantially reduce communication and memory overhead. Evaluated on four real-world datasets (Gowalla, WeePlace, Brightkite, and FourSquare), FLLL3M achieves state-of-the-art performance: on Gowalla it reaches a top-1 accuracy (Acc@1) of 12.55 and a mean reciprocal rank (MRR) of 0.1422. It also reduces model parameters by up to 45.6% and memory footprint by 52.7%, balancing prediction accuracy, privacy preservation, and resource efficiency.
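To make the outer-product idea concrete, here is a minimal sketch of how an outer product can fuse an on-device mobility embedding with an LLM-derived semantic embedding before a prediction head. The function name, dimensions, and embedding sources are illustrative assumptions for exposition, not the paper's actual specification.

```python
import numpy as np

def outer_product_fuse(mobility_emb, llm_emb):
    """Fuse two embeddings via their outer product, flattened to a vector.

    The outer product captures all pairwise interactions between the two
    representations; the flattened result can feed a linear prediction head.
    (Illustrative sketch; not FLLL3M's exact mechanism.)
    """
    return np.outer(mobility_emb, llm_emb).reshape(-1)

rng = np.random.default_rng(0)
mob = rng.standard_normal(8)    # hypothetical on-device mobility embedding
sem = rng.standard_normal(16)   # hypothetical LLM semantic embedding
fused = outer_product_fuse(mob, sem)
print(fused.shape)  # (128,): one feature per (mobility, semantic) pair
```

Because only the fused interaction features (or the model parameters trained on them) would leave the device, the raw location sequence itself never needs to be uploaded.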
📄 Abstract
We propose FLLL3M (Federated Learning with Large Language Models for Mobility Modeling), a privacy-preserving framework for Next-Location Prediction (NxLP). By retaining user data locally and leveraging LLMs through an efficient outer-product mechanism, FLLL3M ensures high accuracy with low resource demands. It achieves state-of-the-art results on Gowalla (Acc@1: 12.55, MRR: 0.1422), WeePlace (10.71, 0.1285), Brightkite (10.42, 0.1169), and FourSquare (8.71, 0.1023), while reducing parameters by up to 45.6% and memory usage by 52.7%.
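The "retaining user data locally" claim rests on a federated aggregation step: clients train on-device and share only parameter updates. A minimal sketch of the generic FedAvg-style weighted average follows; FLLL3M's exact aggregation rule is not given here, so treat this as an assumed baseline, with all names illustrative.

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """Average client parameter vectors, weighted by local dataset size.

    Only these parameter vectors cross the network; raw location data
    stays on each device. (Generic FedAvg sketch, not FLLL3M's rule.)
    """
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()              # per-client mixing weights
    stacked = np.stack(client_params)          # (n_clients, n_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Two hypothetical clients: the larger one pulls the average toward itself.
clients = [np.full(4, 1.0), np.full(4, 3.0)]
global_update = fed_avg(clients, client_sizes=[10, 30])
print(global_update)  # [2.5 2.5 2.5 2.5]
```

Weighting by dataset size keeps the global model unbiased when clients hold very different amounts of check-in history, which is typical for mobility data.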