🤖 AI Summary
Standard reinforcement learning relies on correlational rewards, leaving agents brittle in ecologically realistic, noisy environments where they struggle to identify the causal effects of their own actions, even though human infants develop such causal agency early in development. To address this, we propose a causally grounded intrinsic reward, the Causal Action Influence Score (CAIS), which quantifies the causal effect of an agent's actions on perceptual outcomes as the 1-Wasserstein distance between the action-conditioned and baseline outcome distributions, isolating the agent's influence from confounding environmental noise. Augmented with a prediction-error-driven surprise signal, the agent also reproduces the psychological phenomenon of the "extinction burst"; this work is the first to systematically incorporate causal inference into intrinsic reward design. Experiments demonstrate that, under strong external interference where conventional correlational rewards fail completely, our method reliably identifies the agent's causal efficacy, significantly enhancing robustness and policy acquisition in ecologically valid settings.
📝 Abstract
While human infants robustly discover their own causal efficacy, standard reinforcement learning agents remain brittle, as their reliance on correlation-based rewards fails in noisy, ecologically valid scenarios. To address this, we introduce the Causal Action Influence Score (CAIS), a novel intrinsic reward rooted in causal inference. CAIS quantifies an action's influence by measuring the 1-Wasserstein distance between the learned distribution of sensory outcomes conditional on that action, $p(h|a)$, and the baseline outcome distribution, $p(h)$. This divergence provides a robust reward that isolates the agent's causal impact from confounding environmental noise. We test our approach in a simulated infant-mobile environment where correlation-based perceptual rewards fail completely when the mobile is subjected to external forces. In stark contrast, CAIS enables the agent to filter this noise, identify its influence, and learn the correct policy. Furthermore, the high-quality predictive model learned for CAIS allows our agent, when augmented with a surprise signal, to successfully reproduce the "extinction burst" phenomenon. We conclude that explicitly inferring causality is a crucial mechanism for developing a robust sense of agency, offering a psychologically plausible framework for more adaptive autonomous systems.
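The core of CAIS, the 1-Wasserstein distance between the action-conditioned outcome distribution $p(h|a)$ and the baseline $p(h)$, can be sketched in a toy one-dimensional setting. This is an illustrative simplification, not the paper's implementation: the outcome variable, the Gaussian distributions, and the sample sizes below are all invented for demonstration, and empirical samples stand in for the learned predictive model.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical 1-D sensory outcome h (e.g., mobile movement magnitude).
# Baseline p(h): environmental noise alone drives the outcome.
baseline = rng.normal(loc=0.0, scale=1.0, size=5000)

# p(h | a) for an action that genuinely moves the mobile (shifted mean)
# and for an inert action (same distribution as the baseline).
effective_action = rng.normal(loc=1.5, scale=1.0, size=5000)
inert_action = rng.normal(loc=0.0, scale=1.0, size=5000)

def cais(outcomes_given_action, outcomes_baseline):
    """Empirical 1-Wasserstein distance between p(h|a) and p(h)."""
    return wasserstein_distance(outcomes_given_action, outcomes_baseline)

# A large score signals causal influence; a near-zero score signals none,
# regardless of how noisy the baseline outcomes are.
print(cais(effective_action, baseline))
print(cais(inert_action, baseline))
```

Because the distance compares whole distributions rather than correlating actions with raw outcomes, externally induced motion inflates both $p(h|a)$ and $p(h)$ equally and largely cancels out, which is the intuition behind CAIS's robustness to interference.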