🤖 AI Summary
Large language models (LLMs) suffer from performance degradation over multi-turn interactions due to training solely on static, single-turn data. To address this, we propose a test-time policy adaptation paradigm that leverages implicit reward signals derived from real-time user feedback to dynamically adjust the model's policy online. We introduce T2PAM, the first end-to-end framework for this purpose, and propose ROSA, a lightweight algorithm that achieves efficient adaptation via subset-parameter fine-tuning and one-step optimal policy approximation, backed by theoretical convergence guarantees. ROSA avoids iterative optimization, substantially reducing computational overhead. Extensive evaluation across multiple challenging multi-turn benchmarks demonstrates significant improvements in task completion rate and interaction efficiency, validating both effectiveness and practicality.
📝 Abstract
Large Language Models (LLMs) employ multi-turn interaction as a fundamental paradigm for completing complex tasks. However, their performance often degrades in extended interactions, as they are typically trained on static, single-turn data, which hinders their ability to adapt to real-time user feedback. To address this limitation, we first propose a new paradigm: Test-Time Policy Adaptation for Multi-Turn Interactions (T2PAM), which utilizes user feedback from the ongoing interaction as a reward signal to estimate a latent optimal policy aligned with user preferences, then updates a small subset of parameters to steer the model toward this policy, ultimately enabling efficient in-conversation self-correction. We then introduce Optimum-Referenced One-Step Adaptation (ROSA), a lightweight algorithm that operationalizes T2PAM. ROSA guides the model parameters toward a theoretical optimal policy in a single, efficient update step, avoiding costly iterative gradient-based optimization and minimizing computational overhead. We provide a rigorous theoretical analysis guaranteeing that ROSA's policy converges to the user's preference as the number of interactions increases. Extensive experiments on challenging benchmarks demonstrate that ROSA achieves significant improvements in both task effectiveness and efficiency.
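To make the one-step idea concrete, here is a minimal toy sketch (our own illustration, not the paper's implementation): given scalar rewards derived from user feedback over a small discrete set of candidate responses, the KL-regularized optimal policy has the standard closed form pi*(y) ∝ pi_ref(y) · exp(r(y)/β), so the policy can be moved to it in a single update rather than by iterative gradient descent. The function name `one_step_adapt` and the three-response setup are assumptions for illustration only.

```python
import math

def one_step_adapt(logits, rewards, beta=1.0):
    """Move logits to the closed-form KL-optimal policy in one step.

    pi*(y) is proportional to pi_ref(y) * exp(r(y) / beta), where pi_ref
    is the softmax of the current logits. No iterative optimization.
    """
    # Current (reference) policy via a numerically stable softmax.
    m = max(logits)
    ref = [math.exp(l - m) for l in logits]
    z = sum(ref)
    ref = [p / z for p in ref]
    # Reweight by exp(reward / beta) and renormalize: the KL-optimal target.
    target = [p * math.exp(r / beta) for p, r in zip(ref, rewards)]
    z = sum(target)
    target = [p / z for p in target]
    # One-step "update": new logits are log-probabilities of the target.
    return [math.log(p) for p in target]

logits = [0.0, 0.0, 0.0]    # uniform reference policy over 3 candidate replies
rewards = [1.0, -1.0, 0.0]  # implicit feedback: the user liked reply 0
new_logits = one_step_adapt(logits, rewards, beta=0.5)
probs = [math.exp(l) for l in new_logits]
print(max(range(3), key=lambda i: probs[i]))  # reply 0 is now most probable
```

In the full method this reweighting would act on a small subset of model parameters rather than on explicit per-response logits, but the single closed-form step toward the theoretical optimum is the same principle.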