Test-Time Policy Adaptation for Enhanced Multi-Turn Interactions with LLMs

📅 2025-09-27
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) suffer performance degradation over multi-turn interactions because they are typically trained on static, single-turn data. To address this, we propose Test-Time Policy Adaptation for Multi-Turn Interactions (T2PAM), a paradigm that leverages implicit reward signals derived from real-time user feedback to dynamically adjust the model's policy online. We operationalize T2PAM with ROSA, a lightweight algorithm that achieves efficient adaptation via subset-parameter fine-tuning and one-step optimal policy approximation, backed by theoretical convergence guarantees. Because ROSA avoids iterative optimization, it substantially reduces computational overhead. Extensive evaluation across multiple challenging multi-turn benchmarks demonstrates significant improvements in task completion rate and interaction efficiency, validating both effectiveness and practicality.
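For readers who want the control flow made concrete, here is a minimal runnable sketch of the loop the summary describes. Everything in it is a hypothetical stand-in, not the paper's API: a single scalar plays the role of the small trainable parameter subset, and `feedback_to_reward` and `rosa_one_step_update` are illustrative names and update rules.

```python
"""Toy sketch of a T2PAM-style loop: hypothetical stand-ins only,
illustrating the control flow, not the paper's actual algorithm."""

def feedback_to_reward(feedback: str) -> float:
    # Hypothetical implicit reward: positive feedback -> +1, else -1.
    return 1.0 if "good" in feedback.lower() else -1.0

def rosa_one_step_update(theta: float, reward: float, beta: float = 0.5) -> float:
    # Hypothetical one-step update: nudge the adaptable parameter subset
    # (here a single scalar) in the reward-preferred direction in one
    # closed-form step, with no inner iterative optimization loop.
    return theta + beta * reward

theta = 0.0  # stands in for the small trainable parameter subset
for feedback in ["bad answer", "good, thanks", "good"]:
    reward = feedback_to_reward(feedback)
    theta = rosa_one_step_update(theta, reward)
    print(f"feedback={feedback!r} reward={reward:+.0f} theta={theta:.2f}")
```

The point of the sketch is what is absent: each turn yields one reward and one parameter nudge, rather than an inner gradient-descent loop.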

📝 Abstract
Large Language Models (LLMs) employ multi-turn interaction as a fundamental paradigm for completing complex tasks. However, their performance often degrades in extended interactions, as they are typically trained on static, single-turn data, which hinders their ability to adapt to real-time user feedback. To address this limitation, we first propose a new paradigm: Test-Time Policy Adaptation for Multi-Turn Interactions (T2PAM), which utilizes user feedback from the ongoing interaction as a reward signal to estimate a latent optimal policy aligned with user preferences, then updates a small subset of parameters to steer the model toward this policy, ultimately enabling efficient in-conversation self-correction. We then introduce Optimum-Referenced One-Step Adaptation (ROSA), a lightweight algorithm that operationalizes T2PAM. ROSA guides the model parameters toward a theoretical optimal policy in a single, efficient update step, avoiding costly iterative gradient-based optimization and minimizing computational overhead. We provide a rigorous theoretical analysis guaranteeing that the policy of ROSA converges to the user's preferences as the number of interactions increases. Extensive experiments on challenging benchmarks demonstrate that ROSA achieves significant improvements in both task effectiveness and efficiency.
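The abstract refers to a "latent optimal policy" and a "theoretical optimal policy" without stating them. As context (and an assumption, since the paper's exact formulation is not shown here), the usual reference point for such constructions is the closed-form optimum of KL-regularized reward maximization, with reference policy π_ref, reward r, and regularization strength β:

```latex
% Context only: the standard KL-regularized optimum that "latent optimal
% policy" formulations typically reference; not the paper's exact derivation.
% Maximizing  E_{y~pi}[r(x,y)] - beta * KL(pi || pi_ref)  over pi gives:
\[
  \pi^{*}(y \mid x)
    = \frac{1}{Z(x)}\, \pi_{\mathrm{ref}}(y \mid x)\,
      \exp\!\Big(\frac{r(x, y)}{\beta}\Big),
  \qquad
  Z(x) = \sum_{y} \pi_{\mathrm{ref}}(y \mid x)\,
         \exp\!\Big(\frac{r(x, y)}{\beta}\Big).
\]
```

Because this target is available in closed form once the reward is estimated, a one-step scheme can move parameters toward it without iterative optimization, which is consistent with the single-update behavior the abstract attributes to ROSA.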
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM performance in multi-turn interactions
Adapting models to real-time user feedback dynamically
Enabling efficient in-conversation self-correction mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Test-time policy adaptation using user feedback
Lightweight single-step algorithm for parameter updates (see the sketch after this list)
Convergence to user preferences through interaction
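The "small subset of parameters" idea pairs naturally with adapter-style fine-tuning. Below is a hedged PyTorch sketch under that assumption: a frozen base layer plus a tiny low-rank adapter, updated in a single reward-weighted step. The adapter construction, learning rate, and objective are all illustrative choices, not taken from the paper.

```python
# Hypothetical illustration of subset-parameter fine-tuning at test time:
# freeze the base weights and take a single update step on a small
# low-rank adapter. The paper's actual parameter selection and update
# rule may differ.
import torch
import torch.nn as nn

class AdapterLinear(nn.Module):
    def __init__(self, dim: int, rank: int = 4):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.base.requires_grad_(False)               # frozen base weights
        self.down = nn.Linear(dim, rank, bias=False)  # small trainable subset
        self.up = nn.Linear(rank, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.up(self.down(x))

model = AdapterLinear(dim=16)
x = torch.randn(2, 16)                # stands in for two interaction contexts
reward = torch.tensor([1.0, -1.0])    # implicit per-response feedback reward

# One-step adaptation: a single reward-weighted update on adapter params only.
score = (model(x).sum(dim=-1) * reward).mean()
score.backward()
with torch.no_grad():
    for p in [*model.down.parameters(), *model.up.parameters()]:
        p += 1e-2 * p.grad            # ascend the reward signal once
        p.grad = None
```

Updating only the adapter keeps the per-turn cost small and leaves the base model untouched, which matches the lightweight, in-conversation setting the paper targets.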
Authors
Chenxing Wei (Shenzhen University)
Hong Wang (University of Science and Technology of China, China)
Ying He (College of Computer Science and Software Engineering, Shenzhen University, China)
Fei Yu (School of Information Technology, Carleton University, Canada)
Yao Shu (Hong Kong University of Science and Technology (Guangzhou), China)