Test-Time-Matching: Decouple Personality, Memory, and Linguistic Style in LLM-based Role-Playing Language Agent

📅 2025-07-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address insufficient immersion, heavy reliance on training data, and excessive computational resources in large language model (LLM) role-playing, this paper proposes the Test-Time-Matching (TTM) framework. TTM operates entirely at inference time via context engineering—enabling fine-grained, parameter-free role disentanglement and matching without any model fine-tuning. It explicitly decomposes role characteristics into three orthogonal dimensions: *personality*, *memory*, and *linguistic style*, supporting flexible cross-role composition and dynamic substitution. To our knowledge, TTM is the first method to achieve fully automatic, zero-training disentanglement and controllable recombination of these three dimensions, leveraging a three-stage generative pipeline and a context-driven, test-time matching mechanism. Human evaluation demonstrates that TTM significantly outperforms existing zero-shot role-playing approaches in dialogue expressiveness, stylistic consistency, and role fidelity.

📝 Abstract
The rapid advancement of large language models (LLMs) has enabled role-playing language agents to demonstrate significant potential in various applications. However, relying solely on prompts and contextual inputs often proves insufficient for achieving deep immersion in specific roles, particularly well-known fictional or public figures. On the other hand, fine-tuning-based approaches face limitations due to the challenges of data collection and the computational resources required for training, restricting their broader applicability. To address these issues, we propose Test-Time-Matching (TTM), a training-free role-playing framework based on test-time scaling and context engineering. TTM uses LLM agents to automatically decouple a character's features into personality, memory, and linguistic style. Our framework involves a structured, three-stage generation pipeline that utilizes these features for controlled role-playing. It achieves high-fidelity role-playing performance and also enables seamless combinations across diverse linguistic styles, and even variations in personality and memory. We evaluate our framework through human assessment, and the results demonstrate that our method achieves outstanding performance in generating expressive and stylistically consistent character dialogues.
Problem

Research questions and friction points this paper is trying to address.

Decoupling character traits in LLM role-playing agents
Overcoming limitations of prompt-based and fine-tuning methods
Achieving high-fidelity role-play without extensive training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decouples character features into personality, memory, and linguistic style
Uses test-time scaling and context engineering
Three-stage pipeline for controlled role-playing
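The decoupling idea can be illustrated with a minimal sketch. This is not the paper's implementation: the `RoleProfile` structure, field names, and prompt template below are hypothetical, and only show how decoupled personality, memory, and linguistic-style fields could be composed into a role-playing context at inference time, and recombined across roles without any fine-tuning.

```python
from dataclasses import dataclass

@dataclass
class RoleProfile:
    # The three decoupled dimensions from the paper's decomposition
    personality: str
    memory: list[str]       # salient facts the character "remembers"
    linguistic_style: str

def compose_role_prompt(profile: RoleProfile, user_turn: str) -> str:
    """Assemble a role-playing context purely at inference time (no training)."""
    memory_block = "\n".join(f"- {m}" for m in profile.memory)
    return (
        f"Personality: {profile.personality}\n"
        f"Relevant memories:\n{memory_block}\n"
        f"Respond in this style: {profile.linguistic_style}\n"
        f"User: {user_turn}\n"
        f"Character:"
    )

# Because the dimensions are orthogonal, they can be swapped across roles,
# e.g. keep one character's personality and memory but borrow another's style.
holmes = RoleProfile(
    personality="analytical, detached, relentlessly observant",
    memory=["Solved the Baskerville case", "Resides at 221B Baker Street"],
    linguistic_style="Victorian English, precise diction",
)
hybrid = RoleProfile(holmes.personality, holmes.memory,
                     linguistic_style="archaic nautical slang")
prompt = compose_role_prompt(hybrid, "What do you deduce about this letter?")
```

The resulting `prompt` string would then be sent to an LLM; the cross-role substitution above mirrors the "seamless combinations" the abstract describes, though the actual TTM pipeline performs the decoupling and matching automatically via LLM agents rather than hand-written fields.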
Xiaoyu Zhan
Nanjing University
Computer Vision
Xinyu Fu
Hong Kong Research Center, Huawei
Large Language Models, MLLM, Agents, Heterogeneous Graphs
Hao Sun
Nanjing University
Yuanqi Li
Nanjing University
Jie Guo
Nanjing University
Yanwen Guo
Nanjing University