Interpreting Emergent Planning in Model-Free Reinforcement Learning

📅 2025-04-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates whether purely model-free reinforcement learning (RL) agents can spontaneously develop planning capabilities, and elucidates the underlying mechanisms. Method: focusing on the Sokoban task, the authors propose a concept-based interpretability framework that integrates Concept Activation Vector-style probing (cf. TCAV, "Testing with Concept Activation Vectors"), representation-level interventions, causal attribution, and reverse engineering of the learned planning algorithm. Contribution/Results: the paper provides the first mechanistic evidence of implicit planning in model-free RL. Specifically: (1) the DRC agent autonomously constructs an implicit planning procedure resembling parallelized bidirectional search, without any explicit planning module; (2) its decisions rely on learned concept representations that formulate internal plans and causally influence action selection; (3) the emergence of planning-relevant representations coincides with the ability to benefit from additional test-time compute, and increased computation yields significant gains in planning performance. This work challenges the conventional assumption that "model-free implies no planning," establishing a verifiable and intervenable planning mechanism within model-free RL.

📝 Abstract
We present the first mechanistic evidence that model-free reinforcement learning agents can learn to plan. This is achieved by applying a methodology based on concept-based interpretability to a model-free agent in Sokoban -- a commonly used benchmark for studying planning. Specifically, we demonstrate that DRC, a generic model-free agent introduced by Guez et al. (2019), uses learned concept representations to internally formulate plans that both predict the long-term effects of actions on the environment and influence action selection. Our methodology involves: (1) probing for planning-relevant concepts, (2) investigating plan formation within the agent's representations, and (3) verifying that discovered plans (in the agent's representations) have a causal effect on the agent's behavior through interventions. We also show that the emergence of these plans coincides with the emergence of a planning-like property: the ability to benefit from additional test-time compute. Finally, we perform a qualitative analysis of the planning algorithm learned by the agent and discover a strong resemblance to parallelized bidirectional search. Our findings advance understanding of the internal mechanisms underlying planning behavior in agents, which is important given the recent trend of emergent planning and reasoning capabilities in LLMs through RL.
Problem

Research questions and friction points this paper is trying to address.

Mechanistic evidence of planning in model-free RL agents
Interpretability methodology for plan formation in Sokoban
Causal effect of learned plans on agent behavior
Innovation

Methods, ideas, or system contributions that make the work stand out.

Concept-based interpretability in model-free RL
Probing and verifying learned planning representations
Resemblance to parallelized bidirectional search
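The probe-then-intervene loop behind the first two bullets can be sketched in miniature. The sketch below is illustrative only: it uses synthetic activations (the real method probes the DRC agent's recurrent state), and the dimension, labels, and least-squares probe are assumptions, not the paper's exact setup. It shows (1) fitting a linear probe for a planning-relevant concept and (2) intervening along the probe direction to flip the represented concept, mirroring the causal verification step.

```python
import numpy as np

# Synthetic stand-in for agent activations labeled with a planning-relevant
# concept (e.g. "this box will be pushed left"). Hypothetical data throughout.
rng = np.random.default_rng(0)
d, n = 32, 500                            # hidden-state dim and sample count (assumed)
concept_dir = rng.normal(size=d)
concept_dir /= np.linalg.norm(concept_dir)

labels = rng.integers(0, 2, size=n)       # concept present (1) / absent (0)
acts = rng.normal(scale=0.5, size=(n, d)) + np.outer(labels * 2.0 - 1.0, concept_dir)

# Step 1 -- probe: fit a linear classifier (plain least squares for simplicity).
w, *_ = np.linalg.lstsq(acts, labels * 2.0 - 1.0, rcond=None)
preds = (acts @ w) > 0
accuracy = (preds == labels.astype(bool)).mean()

# Step 2 -- intervention: reflect each activation across the probe's decision
# boundary, so the represented concept flips; a causal probe direction should
# then change the agent's downstream behavior.
intervened = acts - 2.0 * np.outer((acts @ w) / (w @ w), w)
flipped = (intervened @ w) > 0

print(f"probe accuracy: {accuracy:.2f}, concepts flipped: {(flipped != preds).mean():.2f}")
```

In the paper, the analogue of `flipped` is checked against the agent's actual behavior after the intervention, which is what elevates the probe from a correlate to a causal mechanism.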
Thomas Bush
University of Cambridge
Stephen Chung
University of Cambridge
Usman Anwar
University of Cambridge
Adrià Garriga-Alonso
Research Scientist, FAR AI
David Krueger
Mila, University of Montreal