WildFeedback: Aligning LLMs With In-situ User Interactions And Feedback

📅 2024-08-28
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Aligning large language models (LLMs) with real user preferences remains challenging: conventional approaches rely on labor-intensive human annotation or synthetic data, and suffer from subjectivity, deviation from authentic user feedback, and amplification of model biases. Method: This paper proposes a dynamic preference modeling framework grounded in multi-turn human-LLM interactions. It detects implicit and explicit user feedback directly from natural dialogues, unifying feedback detection, multi-granularity classification, preference-pair construction, and reinforcement-learning fine-tuning. Contribution/Results: The resulting dynamic preference dataset requires no manual annotation, avoiding feedback loops and subjective bias. Evaluated on multiple standard benchmarks and a newly introduced checklist-guided evaluation protocol, the method achieves significant improvements in alignment performance, demonstrating gains in scalability, objectivity, and bias mitigation.

📝 Abstract
As large language models (LLMs) continue to advance, aligning these models with human preferences has emerged as a critical challenge. Traditional alignment methods, relying on human- or LLM-annotated datasets, are limited by their resource-intensive nature, inherent subjectivity, misalignment with real-world user preferences, and the risk of feedback loops that amplify model biases. To overcome these limitations, we introduce WildFeedback, a novel framework that leverages in-situ user feedback during conversations with LLMs to create preference datasets automatically. Given a corpus of multi-turn user-LLM conversations, WildFeedback identifies and classifies user feedback to LLM responses between conversation turns. This feedback is then used to construct examples of preferred and dispreferred responses according to users' preferences. Our experiments demonstrate that LLMs fine-tuned on the WildFeedback dataset exhibit significantly improved alignment with user preferences, as evidenced by both traditional benchmarks and our proposed checklist-guided evaluation. By incorporating in-situ feedback from actual users, WildFeedback addresses the scalability, subjectivity, and bias challenges that plague existing approaches, marking a significant step toward developing LLMs that are more responsive to the diverse and evolving needs of their users.
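The core idea (detect user feedback between turns, then pair the criticized response with its revision) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `Turn`, `mine_preference_pairs`, and the keyword-based dissatisfaction detector are assumptions; the actual framework classifies feedback with an LLM rather than keywords.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    role: str   # "user" or "assistant"
    text: str

# Toy stand-in for the feedback classifier (an assumption for this sketch):
# flag a user turn as negative feedback if it contains a complaint cue.
NEGATIVE_CUES = ("not what i", "wrong", "doesn't", "instead", "no,")

def is_negative_feedback(text: str) -> bool:
    lowered = text.lower()
    return any(cue in lowered for cue in NEGATIVE_CUES)

def mine_preference_pairs(turns):
    """Scan a conversation for in-situ feedback and build preference pairs.

    Heuristic: when a user turn signals dissatisfaction, the assistant
    response it criticizes becomes the dispreferred example, and the
    assistant's revised follow-up becomes the preferred example for the
    original prompt.
    """
    pairs = []
    for i in range(len(turns) - 3):
        prompt, reply, feedback, revision = turns[i:i + 4]
        if (prompt.role == "user" and reply.role == "assistant"
                and feedback.role == "user" and revision.role == "assistant"
                and is_negative_feedback(feedback.text)):
            pairs.append({
                "prompt": prompt.text,
                "dispreferred": reply.text,
                "preferred": revision.text,
            })
    return pairs
```

Pairs in this (prompt, preferred, dispreferred) form are the standard input format for preference-based fine-tuning methods such as DPO.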
Problem

Research questions and friction points this paper is trying to address.

Aligning LLMs with real-world user preferences
Overcoming the cost, subjectivity, and bias of annotation-based alignment methods
Incorporating in-situ user feedback from live conversations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mines in-situ user feedback from natural conversations
Automates preference-dataset creation without manual annotation
Improves LLM alignment with real user preferences