PersonaAgent: When Large Language Model Agents Meet Personalization at Test Time

📅 2025-06-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM agents predominantly adopt static, generic architectures, limiting their adaptability to user-specific preferences. This paper introduces the first test-time personalization framework for LLM agents. Methodologically, it features: (1) a dynamic persona prompting mechanism that jointly optimizes user representation via simulated interaction and a response-difference-based textual loss; (2) dual-mode memory—episodic and semantic—co-evolving with a customizable action module to establish a closed-loop memory–action adaptation; and (3) real-time alignment of tool invocation and action space with user preferences. Evaluated on a multi-task personalized benchmark, our approach significantly outperforms state-of-the-art baselines, demonstrating both strong customization capability and scalability to real-world scenarios. To our knowledge, this is the first work to systematically validate the feasibility and effectiveness of dynamic, test-time personalization for LLM agents.

📝 Abstract
Large Language Model (LLM) empowered agents have recently emerged as advanced paradigms that exhibit impressive capabilities across a wide range of domains and tasks. Despite their potential, current LLM agents often adopt a one-size-fits-all approach, lacking the flexibility to respond to users' varying needs and preferences. This limitation motivates us to develop PersonaAgent, the first personalized LLM agent framework designed to address versatile personalization tasks. Specifically, PersonaAgent integrates two complementary components: a personalized memory module that includes episodic and semantic memory mechanisms, and a personalized action module that enables the agent to perform tool actions tailored to the user. At the core, the persona (defined as a unique system prompt for each user) functions as an intermediary: it leverages insights from personalized memory to control agent actions, while the outcomes of these actions in turn refine the memory. Based on this framework, we propose a test-time user-preference alignment strategy that simulates the latest n interactions to optimize the persona prompt, ensuring real-time user preference alignment through textual-loss feedback between simulated and ground-truth responses. Experimental evaluations demonstrate that PersonaAgent significantly outperforms baseline methods, not only personalizing the action space effectively but also scaling to real-world applications at test time. These results underscore the feasibility and potential of our approach in delivering tailored, dynamic user experiences.
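The persona-as-intermediary loop described in the abstract (persona draws on memory to control actions; action outcomes refine memory) can be sketched as follows. This is a minimal, illustrative sketch, not the paper's implementation: the class names (`PersonaAgentSketch`, `PersonalizedMemory`), the keyword-based tool selection, and the stubbed-out tool functions are all assumptions standing in for the LLM-driven components.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalizedMemory:
    """Dual-mode memory: episodic holds raw interaction records,
    semantic holds distilled user facts and preferences."""
    episodic: list = field(default_factory=list)   # (query, response) pairs
    semantic: dict = field(default_factory=dict)   # e.g. {"genre": "sci-fi"}

    def record(self, query: str, response: str) -> None:
        self.episodic.append((query, response))

class PersonaAgentSketch:
    """A persona (per-user system prompt) mediates between memory and actions."""
    def __init__(self, persona: str, tools: dict):
        self.persona = persona          # unique system prompt for this user
        self.memory = PersonalizedMemory()
        self.tools = tools              # personalized action space

    def act(self, query: str) -> str:
        # In the real framework, an LLM conditioned on the persona and memory
        # selects a tool; here we naively pick the first tool whose name
        # appears in the query, with a pass-through fallback.
        tool = next((t for name, t in self.tools.items() if name in query),
                    lambda q: f"[no-tool] {q}")
        response = tool(query)
        self.memory.record(query, response)  # action outcome refines memory
        return response
```

A usage example with a toy tool:

```python
agent = PersonaAgentSketch("Prefer concise answers.",
                           {"search": lambda q: f"results for {q}"})
agent.act("search cats")   # invokes the search tool and logs the exchange
```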
Problem

Research questions and friction points this paper is trying to address.

Personalizing LLM agents for diverse user needs
Integrating memory and action modules for customization
Aligning real-time user preferences during test-time
Innovation

Methods, ideas, or system contributions that make the work stand out.

Personalized memory and action modules integration
Test-time user-preference alignment strategy
Dynamic persona prompt optimization via feedback
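The test-time alignment strategy above (replay the latest n interactions, compare simulated responses against ground truth, and feed the textual loss back into the persona prompt) can be sketched as below. This is a toy sketch under stated assumptions: `textual_loss` and `align_persona` are hypothetical names, the string-append update is a stand-in for the paper's LLM-based persona rewriting, and `simulate` stands in for running the agent on a past query.

```python
def textual_loss(simulated: str, ground_truth: str) -> str:
    """Return a natural-language 'gradient': how the agent's simulated
    response differs from the user's actual response (empty if they match)."""
    if simulated == ground_truth:
        return ""
    return f"expected '{ground_truth}' but simulated '{simulated}'"

def align_persona(persona: str, history: list, simulate, n: int = 3) -> str:
    """Replay the latest n (query, ground_truth) interactions and fold each
    non-empty textual loss back into the persona prompt."""
    for query, ground_truth in history[-n:]:
        simulated = simulate(persona, query)
        feedback = textual_loss(simulated, ground_truth)
        if feedback:
            # Stand-in for an LLM rewriting the persona from the feedback.
            persona += f"\nAdjust: {feedback}"
    return persona
```

With a stub `simulate` that only answers well when the persona mentions conciseness, a mismatched persona accumulates `Adjust:` feedback while an aligned persona is returned unchanged.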