AI Summary
To address the limited personalization capability of large language models (LLMs), where existing approaches either lack a dedicated personalization mechanism or rely on shared user data, this paper proposes Fermi, a few-shot personalization method. Fermi uniquely leverages the LLM's own mis-aligned responses as optimization signals, dynamically refining a set of personalized prompts by jointly conditioning on the user's profile and, at inference time, on the context of the test query. Its core innovation lies in enabling effective personalization without accessing original training data or sharing sensitive user information; only a small number of historical user-feedback instances are required. Extensive evaluations across multiple benchmarks demonstrate that Fermi consistently outperforms state-of-the-art baselines, with significant gains in both personalized response accuracy and consistency with users' opinions.
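To make the optimization loop concrete, here is a minimal Python sketch of this kind of feedback-driven prompt refinement. It is an illustration under stated assumptions, not the paper's implementation: the `llm` callable, the exact-match scoring, and all prompt templates are hypothetical placeholders.

```python
from typing import Callable, List, Tuple

def score_prompt(
    llm: Callable[[str], str],
    prompt: str,
    examples: List[Tuple[str, str]],
) -> Tuple[float, List[Tuple[str, str, str]]]:
    """Score a candidate prompt on the user's few-shot opinions and
    collect the mis-aligned (query, user_answer, model_answer) triples."""
    misaligned, correct = [], 0
    for query, gold in examples:
        response = llm(f"{prompt}\n\nQuestion: {query}\nAnswer:").strip()
        if response == gold:  # exact match as a stand-in alignment metric
            correct += 1
        else:
            misaligned.append((query, gold, response))
    return correct / len(examples), misaligned

def optimize_prompt(
    llm: Callable[[str], str],
    profile: str,
    examples: List[Tuple[str, str]],
    n_iters: int = 5,
) -> str:
    """Progressively improve a personalized prompt, feeding the contexts
    of mis-aligned responses back into each improvement step."""
    best = f"Answer on behalf of a user with this profile: {profile}"
    best_score, misaligned = score_prompt(llm, best, examples)
    for _ in range(n_iters):
        if not misaligned:  # all few-shot opinions already matched
            break
        errors = "\n\n".join(
            f"Q: {q}\nUser's answer: {g}\nModel's answer: {r}"
            for q, g, r in misaligned
        )
        meta_prompt = (
            f"User profile: {profile}\n"
            f"Current prompt: {best}\n"
            f"Cases where this prompt led to mis-aligned answers:\n{errors}\n"
            "Write an improved prompt that would fix these cases."
        )
        candidate = llm(meta_prompt).strip()
        score, cand_misaligned = score_prompt(llm, candidate, examples)
        if score >= best_score:  # keep the better-scoring prompt
            best, best_score, misaligned = candidate, score, cand_misaligned
    return best
```

The design point mirrored from the summary is that the improvement step is conditioned on the mis-aligned cases themselves, not only on an aggregate score.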
Abstract
As the diversity of users increases, the capability of large language models (LLMs) to provide personalized responses has become increasingly important. Existing approaches achieve only limited success in LLM personalization, due to the absence of personalized learning or the reliance on shared personal data. This paper proposes a new approach for few-shot personalization of LLMs with their mis-aligned responses (Fermi). Our key idea is to learn a set of personalized prompts for each user by progressively improving the prompts using LLMs, based on the user's profile (e.g., demographic information) and a few examples of previous opinions. During the iterative process of prompt improvement, we incorporate the contexts of the LLM's mis-aligned responses, which are especially crucial for effective personalization of LLMs. In addition, we develop an effective inference method that further leverages the context of the test query and the personalized prompts. Our experimental results demonstrate that Fermi significantly improves performance across various benchmarks, compared to the best-performing baselines.
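One plausible reading of the inference method is retrieval over the learned prompt set: given a test query, pick the personalized prompt whose optimization context best matches the query, then answer with it. The sketch below assumes a simple bag-of-words cosine similarity; `prompt_pool` and the retrieval heuristic are illustrative assumptions, not the paper's exact procedure.

```python
from collections import Counter
from math import sqrt
from typing import Callable, Dict

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def personalized_answer(
    llm: Callable[[str], str],
    prompt_pool: Dict[str, str],  # maps a prompt's optimization context -> prompt text
    test_query: str,
) -> str:
    """Select the learned prompt whose context is most similar to the
    test query, then answer the query with that prompt."""
    q_vec = Counter(test_query.lower().split())
    best_context = max(
        prompt_pool,
        key=lambda ctx: _cosine(Counter(ctx.lower().split()), q_vec),
    )
    chosen = prompt_pool[best_context]
    return llm(f"{chosen}\n\nQuestion: {test_query}\nAnswer:")
```

In practice a learned embedding model would likely replace the bag-of-words similarity; the point here is only that the test query's context steers which personalized prompt is used.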