🤖 AI Summary
This study addresses the challenge of sparse and low-quality user feedback during interactions with conversational agents, which hinders effective human-AI collaboration and model improvement. Grounded in Grice's maxims of conversation, the authors conduct two qualitative user studies to systematically identify four key barriers that impede high-quality feedback. Building on these insights, they propose three actionable design principles and integrate them with a feedback scaffolding mechanism to develop a supportive interaction prototype. Experimental evaluation demonstrates that systems adhering to these design guidelines significantly enhance the quality of user feedback. This work presents the first systematic taxonomy of feedback barriers and introduces a novel paradigm for large language models to proactively elicit more effective user feedback.
📄 Abstract
High-quality feedback is essential for effective human-AI interaction. It bridges knowledge gaps, corrects digressions, and shapes system behavior, both during interaction and throughout model development. Yet despite its importance, human feedback to AI is often infrequent and of low quality. This gap motivates a critical examination of human feedback during interactions with AIs. To understand and overcome the challenges preventing users from giving high-quality feedback, we conducted two studies examining feedback dynamics between humans and conversational agents (CAs). Our formative study, through the lens of Grice's maxims, identified four Feedback Barriers -- Common Ground, Verifiability, Communication, and Informativeness -- that prevent users from providing high-quality feedback. Building on these findings, we derive three design desiderata and show that systems incorporating scaffolds aligned with these desiderata enable users to provide higher-quality feedback. Finally, we issue a call to action to the broader AI community for advances in Large Language Model capabilities that overcome Feedback Barriers.