🤖 AI Summary
This work addresses the challenge of aligning system interpretations of natural language database queries with user intent when input is ambiguous. To this end, the authors propose a design principle called “pragmatic repair” and develop an incremental clarification system that integrates pragmatic reasoning, interpretable decision-variable modeling, and interactive visualization. By centering interaction on interpretable decision variables, the system makes its belief updates traceable, enabling users to efficiently explore and correct ambiguous interpretations. A user study demonstrates that this approach significantly improves users’ ability to identify alternative interpretations and resolve query ambiguities, validating pragmatic repair as a means of enhancing human–AI collaboration and user control.
📝 Abstract
Natural language database interfaces broaden data access, yet they remain brittle under input ambiguity. Standard approaches often collapse uncertainty into a single query, leaving users little recourse when the system’s interpretation diverges from their intent. We reframe this challenge through pragmatic inference: while users economize their expressions, systems operate on priors over the action space that may not align with the users’. In this view, pragmatic repair -- incremental clarification through minimal interaction -- is a natural strategy for resolving underspecification. We present PleaSQLarify, which operationalizes pragmatic repair by structuring interaction around interpretable decision variables that enable efficient clarification. A visual interface complements this by surfacing the action space for exploration, requesting disambiguation from the user, and making belief updates traceable across turns. In a study with twelve participants, PleaSQLarify helped users recognize alternative interpretations and resolve ambiguity efficiently. Our findings highlight pragmatic repair as a design principle that fosters effective user control in natural language interfaces.