The Era of Real-World Human Interaction: RL from User Conversations

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current dialogue models rely on static human feedback for alignment, limiting continuous optimization and multi-dimensional alignment (e.g., personalization, instruction following, reasoning). This work proposes Reinforcement Learning from Human Interaction (RLHI), a novel paradigm that directly leverages real-world user conversations (e.g., WildChat) for online alignment. RLHI introduces two key components: (1) user rewrites as fine-grained corrective signals, and (2) a persona-conditioned reward model grounded in long-term interaction history, which dynamically captures associations between user profiles and turn-level preferences. Experiments demonstrate that RLHI significantly outperforms strong baselines across personalized response generation, instruction following, and complex reasoning tasks. These results validate the effectiveness, scalability, and generalizability of supervision derived from authentic human–AI interactions.

📝 Abstract
We posit that to achieve continual model improvement and multifaceted alignment, future models must learn from natural human interaction. Current conversational models are aligned using pre-annotated, expert-generated human feedback. In this work, we introduce Reinforcement Learning from Human Interaction (RLHI), a paradigm that learns directly from in-the-wild user conversations. We develop two complementary methods: (1) RLHI with User-Guided Rewrites, which revises unsatisfactory model outputs based on users' natural-language follow-up responses, and (2) RLHI with User-Based Rewards, which learns via a reward model conditioned on knowledge of the user's long-term interaction history (termed persona). Together, these methods link long-term user personas to turn-level preferences via persona-conditioned preference optimization. Trained on conversations derived from WildChat, both RLHI variants outperform strong baselines in personalization and instruction-following, and similar feedback enhances performance on reasoning benchmarks. These results suggest organic human interaction offers scalable, effective supervision for personalized alignment.
Problem

Research questions and friction points this paper is trying to address.

Learning from natural human interaction for continual model improvement
Developing reinforcement learning methods using in-the-wild user conversations
Linking long-term user personas to turn-level preferences for personalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning from Human Interaction paradigm
User-Guided Rewrites based on follow-up responses
Persona-conditioned rewards from long-term interaction history
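The persona-conditioned preference optimization above links a user's long-term persona to turn-level preferences. As an illustrative sketch only (not the paper's implementation; the function name, signature, and numbers are hypothetical), this can be framed as a DPO-style objective in which the response log-probabilities are computed with the user's persona prepended to the conversation context:

```python
import math

def persona_dpo_loss(logp_chosen, logp_rejected,
                     ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO-style preference loss (hypothetical sketch).

    Each log-probability is assumed to be computed by the policy (or the
    frozen reference model) on a response, conditioned on the user's
    persona concatenated with the dialogue context. beta scales the
    implicit reward margin, as in standard DPO.
    """
    # Implicit reward margin between chosen and rejected responses,
    # each measured relative to the reference model.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Negative log-sigmoid of the margin: -log(sigmoid(margin)).
    return math.log1p(math.exp(-margin))

# Toy log-probs: the policy slightly favors the persona-consistent
# (chosen) response over the rejected one, relative to the reference.
loss = persona_dpo_loss(-12.0, -15.0, -13.0, -14.5, beta=0.1)
```

A positive margin (policy prefers the persona-consistent response more strongly than the reference does) drives the loss below log 2; training on many such pairs sharpens that preference.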
Chuanyang Jin — FAIR at Meta, Johns Hopkins University
Jing Xu — FAIR at Meta
Bo Liu — FAIR at Meta
Leitian Tao — University of Wisconsin–Madison
Olga Golovneva — FAIR at Meta
Tianmin Shu — Johns Hopkins University
Wenting Zhao — FAIR at Meta
Xian Li — FAIR at Meta
Jason Weston — Meta