🤖 AI Summary
To address the inefficiency of human–LLM interaction stemming from LLMs' difficulty in accurately interpreting context and user intent, this paper systematically reinterprets Grice's Cooperative Principle and its maxims for the human–LLM interaction setting. Through participatory design workshops with communication experts, designers, and end-users, the authors derive nine actionable design considerations spanning the stages of the interaction cycle. The core contribution lies in moving beyond conventional dialogue-system engineering paradigms by translating classical pragmatics theory into a collaborative, LLM-oriented interaction framework. The guidelines combine theoretical grounding with practical applicability, targeting better intent recognition and contextual coherence. This work offers a user-centered framework and methodological foundation for designing more cooperative LLM-based interaction systems.
📝 Abstract
While large language models (LLMs) are increasingly used to assist users in various tasks through natural language interactions, these interactions often fall short because, unlike humans, LLMs have a limited ability to infer contextual nuances and user intentions. To address this challenge, we draw inspiration from the Gricean Maxims, a theory of human communication that proposes principles of effective communication, and aim to derive design insights for enhancing human-AI interaction (HAI). Through participatory design workshops with communication experts, designers, and end-users, we identified ways to apply these maxims across the stages of the HAI cycle. Our findings include reinterpreted maxims tailored to human-LLM contexts and nine actionable design considerations categorized by interaction stage. These insights provide a concrete framework for designing more cooperative and user-centered LLM-based systems, bridging theoretical foundations in communication with practical applications in HAI.