AI Summary
Personalized large language models (PLLMs) face two key challenges: user-level fine-tuning incurs storage overhead that scales linearly with the number of users, and users with few examples struggle to achieve effective personalization. To address these, we propose MTA, a Merge-then-Adapt framework that enables efficient and scalable personalization by decoupling shared meta-personalization from user-specific adaptation. MTA constructs a shared Meta-LoRA Bank and combines adaptive LoRA fusion with LoRA stacking to dynamically retrieve and compose user-specific low-rank adaptations. By unifying meta-learning, vector retrieval, and two-stage fine-tuning, MTA significantly reduces storage requirements while preserving adaptation fidelity. On the LaMP benchmark, MTA consistently outperforms state-of-the-art methods, delivering substantial gains in few-shot personalization accuracy without compromising scalability or generalization.
Abstract
Personalized Large Language Models (PLLMs) aim to align model outputs with individual user preferences, a crucial capability for user-centric applications. However, the prevalent approach of fine-tuning a separate module for each user faces two major limitations: (1) storage costs scale linearly with the number of users, rendering the method unscalable; and (2) fine-tuning a static model from scratch often yields suboptimal performance for users with sparse data. To address these challenges, we propose MTA, a Merge-then-Adapt framework for PLLMs. MTA comprises three key stages. First, we construct a shared Meta-LoRA Bank by selecting anchor users and pre-training meta-personalization traits into meta-LoRA modules. Second, to ensure scalability and enable dynamic personalization beyond static models, we introduce an Adaptive LoRA Fusion stage, which retrieves and dynamically merges the most relevant anchor meta-LoRAs to synthesize a user-specific LoRA, thereby eliminating the need for per-user storage and supporting more flexible personalization. Third, we propose a LoRA Stacking for Few-Shot Personalization stage, which applies an additional ultra-low-rank, lightweight LoRA module on top of the merged LoRA; fine-tuning this module enables effective personalization under few-shot settings. Extensive experiments on the LaMP benchmark demonstrate that our approach outperforms existing state-of-the-art methods across multiple tasks.
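The fusion and stacking stages can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy, not the paper's implementation: the bank contents, retrieval scores, dimensions, and the `fuse` helper are all hypothetical, and each LoRA is represented as a low-rank matrix pair (A, B) whose product is added to a frozen base weight.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r_meta, r_stack, k = 64, 8, 2, 3  # hidden dim, meta-LoRA rank, stacked rank, anchors retrieved

# Hypothetical Meta-LoRA Bank: k anchor meta-LoRAs, each a low-rank pair (A, B)
bank = [(rng.standard_normal((r_meta, d)) * 0.01,
         rng.standard_normal((d, r_meta)) * 0.01) for _ in range(k)]

def fuse(bank, scores):
    """Adaptive LoRA fusion (sketch): score-weighted merge of retrieved anchor deltas."""
    weights = np.asarray(scores, dtype=float)
    weights = weights / weights.sum()  # normalize retrieval scores
    return sum(w * (B @ A) for w, (A, B) in zip(weights, bank))

# Retrieval scores, e.g. similarity between the user's profile embedding and anchor embeddings
scores = [0.7, 0.2, 0.1]
delta_merged = fuse(bank, scores)  # user-specific delta; no per-user LoRA is stored

# LoRA stacking: an ultra-low-rank trainable pair on top of the frozen merged delta.
# B_s is zero-initialized so the stacked module starts as a no-op before few-shot tuning.
A_s = rng.standard_normal((r_stack, d)) * 0.01
B_s = np.zeros((d, r_stack))

W0 = rng.standard_normal((d, d))         # frozen base weight
W_user = W0 + delta_merged + B_s @ A_s   # effective personalized weight
print(W_user.shape)
```

In a real system only `A_s`/`B_s` (rank 2 here) would be trained per user, while the bank and base model stay shared, which is where the storage savings come from.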