Can We Predict Your Next Move Without Breaking Your Privacy?

📅 2025-07-08
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
For privacy-sensitive next-location prediction (NxLP), this paper proposes FLLL3M, the first framework to integrate large language models (LLMs) into the federated learning paradigm for modeling user mobility. FLLL3M employs a lightweight outer-product parameter optimization mechanism, enabling on-device model training without uploading raw location data. It further incorporates distributed gradient aggregation and model compression to significantly reduce communication and memory overhead. Evaluated on four real-world datasets (Gowalla, WeePlace, Brightkite, and FourSquare), FLLL3M achieves state-of-the-art performance: on Gowalla it reaches a top-1 accuracy (Acc@1) of 12.55 and a mean reciprocal rank (MRR) of 0.1422. It also reduces model parameters by up to 45.6% and memory footprint by 52.7%, striking a strong balance among prediction accuracy, privacy preservation, and resource efficiency.
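The paper's exact outer-product mechanism is not detailed in this summary, but the overall federated pattern it describes can be illustrated with a minimal sketch: each client trains on-device (raw location data never leaves the device), compresses its weight update as a rank-1 outer product of two small vectors, and the server aggregates the reconstructed updates FedAvg-style. All function names here (`local_update`, `rank1_compress`, `server_aggregate`) are hypothetical illustrations, not the authors' API.

```python
import numpy as np

def local_update(weights, grad, lr=0.1):
    """One local SGD step on-device; raw location data stays on the client."""
    return weights - lr * grad

def rank1_compress(delta):
    """Approximate a weight-delta matrix by its top rank-1 outer product (via SVD),
    so a client transmits two small vectors instead of the full matrix."""
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    return u[:, 0] * s[0], vt[0]

def server_aggregate(global_w, client_factors):
    """Reconstruct each client's outer-product update and average them (FedAvg-style)."""
    agg = np.mean([np.outer(a, b) for a, b in client_factors], axis=0)
    return global_w + agg

# Toy round with 3 simulated clients and a 4x3 weight matrix.
rng = np.random.default_rng(0)
global_w = np.zeros((4, 3))
factors = []
for _ in range(3):
    grad = rng.normal(size=(4, 3))      # stands in for a real on-device gradient
    new_w = local_update(global_w, grad)
    factors.append(rank1_compress(new_w - global_w))
global_w = server_aggregate(global_w, factors)
print(global_w.shape)  # (4, 3)
```

Note the communication saving: for an m-by-n weight matrix, each client sends m + n floats instead of m * n, which is the kind of trade-off the reported parameter and memory reductions suggest.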

πŸ“ Abstract
We propose FLLL3M--Federated Learning with Large Language Models for Mobility Modeling--a privacy-preserving framework for Next-Location Prediction (NxLP). By retaining user data locally and leveraging LLMs through an efficient outer product mechanism, FLLL3M ensures high accuracy with low resource demands. It achieves state-of-the-art results on Gowalla (Acc@1: 12.55, MRR: 0.1422), WeePlace (10.71, 0.1285), Brightkite (10.42, 0.1169), and FourSquare (8.71, 0.1023), while reducing parameters by up to 45.6% and memory usage by 52.7%.
Problem

Research questions and friction points this paper is trying to address.

Predict a user's next location without compromising privacy
Combine federated learning with LLMs for mobility modeling
Achieve high prediction accuracy with reduced resource demands
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Learning with Large Language Models
Privacy-preserving Next-Location Prediction
Efficient outer-product mechanism reduces resource demands