🤖 AI Summary
Current dialogue models rely on static human feedback for alignment, limiting continuous optimization and multi-dimensional alignment (e.g., personalization, instruction following, reasoning). This work proposes Reinforcement Learning from Human Interaction (RLHI), a novel paradigm that directly leverages real-world user conversations (e.g., WildChat) for online alignment. RLHI introduces two key components: (1) user rewrites as fine-grained corrective signals, and (2) a persona-conditioned reward model grounded in long-term interaction history, which dynamically captures associations between user profiles and turn-level preferences. Experiments demonstrate that RLHI significantly outperforms strong baselines across personalized response generation, instruction following, and complex reasoning tasks. These results validate the effectiveness, scalability, and generalizability of supervision derived from authentic human–AI interactions.
📝 Abstract
We posit that to achieve continual model improvement and multifaceted alignment, future models must learn from natural human interaction. Current conversational models are aligned using pre-annotated, expert-generated human feedback. In this work, we introduce Reinforcement Learning from Human Interaction (RLHI), a paradigm that learns directly from in-the-wild user conversations. We develop two complementary methods: (1) RLHI with User-Guided Rewrites, which revises unsatisfactory model outputs based on users' natural-language follow-up responses, and (2) RLHI with User-Based Rewards, which learns via a reward model conditioned on knowledge of the user's long-term interaction history (termed persona). Together, these methods link long-term user personas to turn-level preferences via persona-conditioned preference optimization. Trained on conversations derived from WildChat, both RLHI variants outperform strong baselines in personalization and instruction-following, and similar feedback enhances performance on reasoning benchmarks. These results suggest organic human interaction offers scalable, effective supervision for personalized alignment.
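To make "persona-conditioned preference optimization" concrete, here is a minimal sketch of a DPO-style pairwise loss, assuming (as is common for this family of methods, not stated in the abstract) that the persona is injected by prepending the user's long-term profile to the prompt before log-probabilities are computed, and that the preferred response is the user-guided rewrite. All function and variable names are illustrative, not the paper's implementation.

```python
import math

def persona_dpo_loss(logp_chosen, logp_rejected,
                     ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO-style loss over a preference pair (e.g. user-guided rewrite vs.
    original unsatisfactory output). Persona conditioning is assumed to
    happen upstream: each log-prob is computed with the persona text
    prepended to the conversation context.

    logp_*     : policy log-prob of each response given (persona, prompt)
    ref_logp_* : frozen reference-model log-probs of the same responses
    beta       : strength of the KL-style regularization toward the reference
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log sigmoid(margin): small when the policy prefers the rewrite
    # more strongly than the reference model does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

In this formulation, the same response pair can receive different gradients for different users, because the log-probabilities depend on the prepended persona; that is the mechanism by which long-term profiles shape turn-level preferences.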