🤖 AI Summary
This study addresses "conversational friction," that is, task disruptions arising from misalignment in common ground, within task-oriented dialogues, using Ubuntu IRC technical support conversations as an empirical setting. Such friction stems from divergent participant beliefs and implicit assumptions, leading to breakdowns in conversational flow. Methodologically, the authors propose an interpretable misalignment analysis framework integrating dialogue state modeling, evaluation of LLMs' detection ability, human annotation validation, and large-scale IRC log analysis. Results show that conversational friction correlates significantly with reduced task success; moreover, although contemporary LLMs can identify overt cases of friction, their performance degrades sharply on context-dependent and implicit common-ground misalignments. The core contributions are: (1) a formal definition and operationalization of "conversational friction," and (2) the first explainable, quantitative analytical paradigm specifically designed for common-ground misalignment. This work provides both theoretical grounding and empirical evidence to guide the development of robust, socially aware dialogue systems.
📝 Abstract
While it is commonly accepted that maintaining common ground plays a role in conversational success, little prior research has connected conversational grounding to success in task-oriented conversations. We study failures of grounding in the Ubuntu IRC dataset, where participants use text-only communication to resolve technical issues. We find that disruptions in conversational flow often stem from a misalignment in common ground, driven by a divergence in the beliefs and assumptions held by participants. These disruptions, which we call conversational friction, significantly correlate with task success. We find that although LLMs can identify overt cases of conversational friction, they struggle with subtler and more context-dependent instances requiring pragmatic or domain-specific reasoning.