🤖 AI Summary
This work addresses a key challenge in automated planning: enabling AI systems to give human planners intelligible, trustworthy explanations through natural dialogue, supporting guidance driven by the user's preferences and expertise. We propose the first conversational, context-aware multi-agent large language model (LLM) framework that dynamically generates personalized explanations without relying on predefined templates and adapts to ongoing user interaction. By tightly integrating planning systems with natural language interaction, instantiated here for scenarios involving goal conflicts, our approach enables more intuitive human–AI collaboration. User studies show that, compared with a conventional template-based explanation interface, our method significantly improves users' understanding of proposed plans and their trust in the system.
📝 Abstract
When automating plan generation for a real-world sequential decision problem, the goal is often not to replace the human planner, but to facilitate an iterative reasoning and elicitation process, where the human's role is to guide the AI planner according to their preferences and expertise. In this context, explanations that respond to users' questions are crucial to improving their understanding of potential solutions and increasing their trust in the system. To enable natural interaction with such a system, we present a multi-agent Large Language Model (LLM) architecture that is agnostic to the explanation framework and enables user- and context-dependent interactive explanations. We also describe an instantiation of this framework for goal-conflict explanations, which we use to conduct a user study comparing the LLM-powered interaction with a baseline template-based explanation interface.