🤖 AI Summary
This paper addresses the problem of autonomous causal directed acyclic graph (DAG) structure learning by an agent under a limited intervention budget, with particular emphasis on robustness to observational and interventional noise. To this end, the authors propose DODO, the first budget-aware active causal discovery framework, which integrates causal inference principles, statistical significance testing, and efficient DAG search heuristics. DODO employs adaptive intervention design and joint observational-interventional reasoning to achieve near-zero-error causal graph recovery even under substantial noise. Empirical evaluation demonstrates that DODO consistently outperforms purely observational methods across diverse resource allocations. In the most challenging setting, characterized by high noise and a tight intervention budget, DODO achieves an absolute F1-score improvement of 0.25 over the best existing baseline. These results significantly advance the practicality, scalability, and accuracy of active causal discovery in resource-constrained, real-world environments.
📝 Abstract
Artificial Intelligence has achieved remarkable advances in recent years, yet much of this progress relies on identifying increasingly complex correlations. Making AI causality-aware has the potential to enhance its performance by providing a deeper understanding of the mechanisms underlying the environment. In this paper, we introduce DODO, an algorithm that defines how an Agent can autonomously learn the causal structure of its environment through repeated interventions. We assume a scenario in which an Agent interacts with a world governed by a causal Directed Acyclic Graph (DAG), which dictates the system's dynamics but remains hidden from the Agent. The Agent's task is to accurately infer the causal DAG, even in the presence of noise. To achieve this, the Agent performs interventions and leverages causal inference techniques to analyze the statistical significance of the observed changes. Results show that DODO outperforms observational approaches in all but the most resource-limited conditions, often reconstructing the structure of the causal graph with zero errors. In the most challenging configuration, DODO outperforms the best baseline by +0.25 F1 points.
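The intervene-then-test loop described in the abstract can be sketched as below. This is a minimal illustration, not the paper's actual DODO algorithm: the three-node ground-truth SCM, the edge coefficients, the noise level, and the mean-shift threshold are all assumptions made for the example. Note that raw intervention effects reveal *descendants*, so a full method would still need a pruning step to separate direct edges from transitive ones.

```python
import random
import statistics

# Hypothetical hidden ground truth: a 3-node linear SCM with X0 -> X1 -> X2.
# The agent never sees this function's internals, only its samples.
def sample(interventions=None, noise=0.5):
    """Draw one sample from the hidden SCM; `interventions` fixes node
    values, mimicking Pearl's do-operator (do(X_i = v))."""
    interventions = interventions or {}
    x = [0.0, 0.0, 0.0]
    x[0] = interventions.get(0, random.gauss(0, 1))
    x[1] = interventions.get(1, 2.0 * x[0] + random.gauss(0, noise))
    x[2] = interventions.get(2, 2.0 * x[1] + random.gauss(0, noise))
    return x

def discover_effects(n_samples=200, threshold=1.0):
    """For each candidate cause i, compare every X_j under do(X_i=0)
    versus do(X_i=3); a large mean shift marks j as causally downstream
    of i.  The fixed threshold is a crude stand-in for a proper
    statistical significance test."""
    edges = set()
    for i in range(3):
        low = [sample({i: 0.0}) for _ in range(n_samples)]
        high = [sample({i: 3.0}) for _ in range(n_samples)]
        for j in range(3):
            if i == j:
                continue
            shift = abs(statistics.mean(s[j] for s in high)
                        - statistics.mean(s[j] for s in low))
            if shift > threshold:
                edges.add((i, j))
    return edges
```

Running `discover_effects()` recovers that X1 and X2 respond to interventions on X0, and X2 responds to interventions on X1, while no variable responds to interventions on X2, matching the ancestral structure of the hidden DAG.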