Integrating Counterfactual Simulations with Language Models for Explaining Multi-Agent Behaviour

๐Ÿ“… 2025-05-23
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address trust bottlenecks in multi-agent systems (MAS) arising from coordination failures and goal misalignment, this paper proposes AXIS, a framework integrating counterfactual causal inference with large language model (LLM)-driven active simulation querying. AXIS iteratively issues "what-if" and "remove" counterfactual queries to an environment simulator to generate causally interpretable behavioural attributions. Methodologically, it introduces a three-part evaluation paradigm unifying subjective preference, correctness, and predictive capability, and employs an external LLM as an automated evaluator. Evaluated across ten autonomous driving scenarios with five LLMs, AXIS improves perceived explanation correctness by at least 7.7% across all models, improves goal prediction accuracy by 23% for four of the five models, and matches or surpasses baseline performance in action prediction. Overall, AXIS achieves the highest holistic scores for explainable multi-agent decision-making among the methods compared.

๐Ÿ“ Abstract
Autonomous multi-agent systems (MAS) are useful for automating complex tasks but raise trust concerns due to risks like miscoordination and goal misalignment. Explainability is vital for trust calibration, but explainable reinforcement learning for MAS faces challenges in state/action space complexity, stakeholder needs, and evaluation. Using the counterfactual theory of causation and LLMs' summarisation capabilities, we propose Agentic eXplanations via Interrogative Simulation (AXIS). AXIS generates intelligible causal explanations for pre-trained multi-agent policies by having an LLM interrogate an environment simulator using queries like 'whatif' and 'remove' to observe and synthesise counterfactual information over multiple rounds. We evaluate AXIS on autonomous driving across 10 scenarios for 5 LLMs with a novel evaluation methodology combining subjective preference, correctness, and goal/action prediction metrics, and an external LLM as evaluator. Compared to baselines, AXIS improves perceived explanation correctness by at least 7.7% across all models and goal prediction accuracy by 23% for 4 models, with improved or comparable action prediction accuracy, achieving the highest scores overall.
Problem

Research questions and friction points this paper is trying to address.

Explaining complex multi-agent system behaviours for trust calibration
Overcoming challenges in explainable reinforcement learning for MAS
Generating intelligible causal explanations using counterfactual simulations and LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates counterfactual simulations with LLMs
Generates causal explanations via interrogative queries
Evaluates with subjective and objective metrics
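The interrogative loop behind these contributions can be illustrated with a toy sketch. All names here (ToySimulator, rollout, interrogate) are illustrative assumptions, not the paper's actual API, and the LLM interrogator is replaced by a fixed query script; the point is only the counterfactual test: an agent is attributed causal responsibility when a 'remove' or 'whatif' query flips the factual outcome.

```python
# Minimal sketch of AXIS-style interrogative counterfactual querying.
# Hypothetical names throughout; not the paper's implementation.

class ToySimulator:
    """Toy driving world: each agent has a speed; two agents at the
    same speed are treated as a collision."""
    def __init__(self, speeds):
        self.speeds = dict(speeds)  # agent name -> speed

    def rollout(self, overrides=None, removed=()):
        # Re-simulate under a counterfactual intervention.
        speeds = {a: s for a, s in self.speeds.items() if a not in removed}
        if overrides:
            speeds.update(overrides)
        vals = list(speeds.values())
        return "collision" if len(vals) != len(set(vals)) else "safe"

def interrogate(sim, factual):
    """Issue 'remove' and 'whatif' queries for every agent; attribute
    the factual outcome to agents whose intervention flips it."""
    attributions = []
    for agent in sim.speeds:
        # 'remove' query: does the outcome change without this agent?
        if sim.rollout(removed={agent}) != factual:
            attributions.append((agent, "remove flips outcome"))
        # 'whatif' query: does a perturbed action change the outcome?
        if sim.rollout(overrides={agent: sim.speeds[agent] + 1}) != factual:
            attributions.append((agent, "whatif(speed+1) flips outcome"))
    return attributions

sim = ToySimulator({"ego": 30, "car_a": 30, "car_b": 25})
factual = sim.rollout()  # "collision": ego and car_a share a speed
causes = interrogate(sim, factual)
print(factual, causes)
```

In this toy run, only `ego` and `car_a` are attributed, since removing or perturbing `car_b` leaves the collision intact; an LLM interrogator would issue such queries over multiple rounds and synthesise the flipped outcomes into a natural-language causal explanation.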
๐Ÿ”Ž Similar Papers
No similar papers found.