AI Summary
To address trust bottlenecks in multi-agent systems (MAS) arising from coordination failures and goal misalignment, this paper proposes AXIS, a novel framework integrating counterfactual causal inference with large language model (LLM)-driven active simulation querying. AXIS iteratively issues 'whatif' and 'remove' counterfactual queries to an environment simulator to generate causally interpretable behavioural attributions. Methodologically, it introduces a tri-dimensional evaluation paradigm unifying subjective preference, correctness, and predictive capability, and employs an external LLM as an automated evaluator. Evaluated across ten autonomous driving scenarios, AXIS improves perceived explanation correctness by at least 7.7%, boosts goal prediction accuracy by 23% on average for four of the five models, and matches or surpasses baseline action prediction accuracy. Overall, AXIS achieves state-of-the-art holistic performance in explainable multi-agent decision-making.
Abstract
Autonomous multi-agent systems (MAS) are useful for automating complex tasks but raise trust concerns due to risks like miscoordination and goal misalignment. Explainability is vital for trust calibration, but explainable reinforcement learning for MAS faces challenges in state/action space complexity, stakeholder needs, and evaluation. Using the counterfactual theory of causation and LLMs' summarisation capabilities, we propose Agentic eXplanations via Interrogative Simulation (AXIS). AXIS generates intelligible causal explanations for pre-trained multi-agent policies by having an LLM interrogate an environment simulator using queries like 'whatif' and 'remove' to observe and synthesise counterfactual information over multiple rounds. We evaluate AXIS on autonomous driving across 10 scenarios for 5 LLMs with a novel evaluation methodology combining subjective preference, correctness, and goal/action prediction metrics, and an external LLM as evaluator. Compared to baselines, AXIS improves perceived explanation correctness by at least 7.7% across all models and goal prediction accuracy by 23% for 4 models, with improved or comparable action prediction accuracy, achieving the highest scores overall.
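The interrogation loop described above can be sketched in a few lines. This is an illustrative sketch only, under the assumption that the LLM proposes one counterfactual query per round and stops when it has enough evidence; all function and class names here are hypothetical, not the paper's API.

```python
from dataclasses import dataclass

# Hypothetical sketch of AXIS-style interrogative simulation (names are
# illustrative, not from the paper): an LLM repeatedly issues counterfactual
# queries such as 'whatif' and 'remove' to a simulator, accumulates the
# observed outcomes, then synthesises a causal explanation from them.

@dataclass
class Evidence:
    query: dict     # e.g. {'op': 'whatif', 'agent': 1, 'action': 'yield'}
    outcome: str    # simulator's description of the counterfactual rollout

def axis_explain(propose_query, run_counterfactual, synthesise,
                 episode, max_rounds=5):
    """Iteratively query the simulator with counterfactuals, then explain.

    propose_query(episode, evidence) -> query dict, or None to stop.
    run_counterfactual(episode, query) -> outcome of the modified rollout.
    synthesise(episode, evidence) -> final natural-language explanation.
    """
    evidence: list[Evidence] = []
    for _ in range(max_rounds):
        query = propose_query(episode, evidence)
        if query is None:          # the LLM decides it has enough evidence
            break
        outcome = run_counterfactual(episode, query)
        evidence.append(Evidence(query, outcome))
    return synthesise(episode, evidence)
```

In this framing, the LLM's role is split into two callables (query proposal and explanation synthesis) so the counterfactual evidence gathered from the simulator is explicit and inspectable.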