Planning to Learn: A Novel Algorithm for Active Learning during Model-Based Planning

📅 2023-08-15
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing model-based planning and active learning approaches show little synergy in highly uncertain environments. Method: The paper proposes the Sophisticated Learning (SL) framework, the first to integrate counterfactual retrospective inference into the Active Inference paradigm. SL lets agents evaluate, during planning, how alternative policies would revise posterior beliefs about model parameters, thereby balancing goal-directed decision-making with information acquisition. The method combines recursive decision-tree search, Bayesian belief updating, and counterfactual modeling of belief evolution, and is validated in a biologically inspired open-ended environment. Results: On tasks featuring dynamic resource constraints and competing opportunities for information gain, SL significantly outperforms baselines, including Bayes-adaptive RL and UCB, achieving better coordination between planning and learning while exhibiting greater biological plausibility.
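
To make the summary's "counterfactual belief evolution" concrete, here is a minimal sketch of the underlying idea, not the authors' implementation: with Dirichlet priors over an observation likelihood, an agent can compute the posterior it would hold after an imagined observation and score how much that hypothetical update would change its model. All names and the single-pseudo-count update rule are illustrative assumptions.

```python
import numpy as np

def simulate_update(counts, imagined_obs, state_beliefs):
    """Dirichlet counts the agent WOULD hold if it saw `imagined_obs`.

    counts        : (n_obs, n_states) concentration parameters over the
                    likelihood P(o | s).
    imagined_obs  : index of a hypothetical (future) observation.
    state_beliefs : (n_states,) posterior over hidden states.
    """
    updated = counts.copy()
    # Spread one pseudo-count across states in proportion to the current
    # state posterior (a common Active Inference learning rule).
    updated[imagined_obs, :] += state_beliefs
    return updated

def expected_information_gain(counts, state_beliefs):
    """Expected KL between the counterfactually updated likelihood and the
    current one, averaged over the observations the agent predicts."""
    A = counts / counts.sum(axis=0, keepdims=True)   # current likelihood
    predicted_obs = A @ state_beliefs                # P(o) under current beliefs
    gain = 0.0
    for o, p_o in enumerate(predicted_obs):
        A_new = simulate_update(counts, o, state_beliefs)
        A_new /= A_new.sum(axis=0, keepdims=True)
        kl_per_state = np.sum(A_new * (np.log(A_new) - np.log(A)), axis=0)
        gain += p_o * float(state_beliefs @ kl_per_state)
    return gain
```
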
📝 Abstract
Active Inference is a recently developed framework for modeling decision processes under uncertainty. Over the last several years, empirical and theoretical work has begun to evaluate the strengths and weaknesses of this approach and how it might be extended and improved. One recent extension is the "sophisticated inference" (SI) algorithm, which improves performance on multi-step planning problems through a recursive decision tree search. However, little work to date has compared SI to other established planning algorithms in reinforcement learning (RL). In addition, SI was developed with a focus on inference as opposed to learning. The present paper therefore has two aims. First, we compare the performance of SI to Bayesian RL schemes designed to solve similar problems. Second, we present and compare an extension of SI, sophisticated learning (SL), that more fully incorporates active learning during planning. SL maintains beliefs about how model parameters would change under the future observations expected under each policy. This allows a form of counterfactual retrospective inference in which the agent considers what could be learned from current or past observations given different future observations. To accomplish these aims, we use a novel, biologically inspired environment that requires an optimal balance between goal-seeking and active learning, and which was designed to highlight the problem structure for which SL offers a unique solution. This setup requires an agent to continually search an open environment for available (but changing) resources in the presence of competing affordances for information gain. Our simulations demonstrate that SL outperforms all other algorithms in this context, most notably Bayes-adaptive RL and upper confidence bound (UCB) algorithms, which aim to solve multi-step planning problems using similar principles (i.e., directed exploration and counterfactual reasoning about belief updates given different possible actions/observations). These results provide added support for the utility of Active Inference in solving this class of biologically relevant problems and offer added tools for testing hypotheses about human cognition.
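
The abstract's key move, maintaining beliefs about how model parameters would change under the observations expected under each policy, can be sketched as a recursive tree search that carries counterfactually updated Dirichlet counts into each subtree. This is a hedged skeleton under assumed interfaces (`model.predict`, `model.obs_probs`, `model.posterior`, `model.reward`, and the pruning threshold are all illustrative), reusing `simulate_update` and `expected_information_gain` from the sketch above.

```python
import numpy as np

def plan(counts, state_beliefs, horizon, model):
    """Recursive sophisticated-learning-style search (illustrative).

    Each action is scored by expected reward plus the information gain its
    expected observations would produce; the search then recurses with the
    counterfactually updated counts, so later steps are evaluated under the
    beliefs the agent WOULD hold by then.
    """
    if horizon == 0:
        return 0.0, None
    best_value, best_action = -np.inf, None
    for a in model.actions:
        next_beliefs = model.predict(state_beliefs, a)        # P(s') after a
        value = model.reward(counts, next_beliefs)            # pragmatic value
        value += expected_information_gain(counts, next_beliefs)  # epistemic value
        for o, p_o in enumerate(model.obs_probs(counts, next_beliefs)):
            if p_o < 1e-2:   # prune unlikely branches (threshold assumed)
                continue
            counts_o = simulate_update(counts, o, next_beliefs)
            beliefs_o = model.posterior(counts_o, next_beliefs, o)
            subtree_value, _ = plan(counts_o, beliefs_o, horizon - 1, model)
            value += p_o * subtree_value
        if value > best_value:
            best_value, best_action = value, a
    return best_value, best_action
```

The crucial difference from plain sophisticated inference is that `counts_o`, not `counts`, flows into the recursion: the agent plans as if it had already learned from the observations each policy would generate.
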
Problem

Research questions and friction points this paper is trying to address.

Active parameter learning during model-based planning
Balancing reward harvesting with information gathering (a count-based UCB bonus, the kind of baseline the paper compares against, is sketched after this list)
Improving decision-making under high uncertainty
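
For contrast with SL's model-based information gain, a count-based upper confidence bound (UCB) bonus, one of the baselines named in the abstract, directs exploration using only visitation statistics. A generic sketch follows; the constant `c` and the count handling are conventional choices, not the paper's exact baseline.

```python
import numpy as np

def ucb_bonus(visit_counts, t, c=1.0):
    """Generic count-based UCB exploration bonus: c * sqrt(ln t / n).

    Unlike SL, which asks what WOULD be learned from imagined observations,
    this bonus never consults the agent's model of the environment; it
    rewards rarely tried options purely for being rarely tried.
    """
    return c * np.sqrt(np.log(max(t, 2)) / np.maximum(visit_counts, 1))
```
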
Innovation

Methods, ideas, or system contributions that make the work stand out.

Active parameter learning within a tree-search framework (an end-to-end toy loop appears after this list)
Counterfactual reasoning about what could be learned under different future observations
Balancing reward harvesting with information gathering
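
Putting the pieces together, here is a toy agent loop showing how the planner's counterfactual updates relate to the real update made once an observation actually arrives. The ring-world model, reward, and observation channel below are invented purely for illustration and are far simpler than the paper's resource-foraging environment.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_OBS = 4, 3

class ToyModel:
    """Illustrative stand-in for the generative model `plan` expects."""
    actions = (0, 1)
    def predict(self, beliefs, action):            # deterministic ring moves
        return np.roll(beliefs, 1 if action else -1)
    def obs_probs(self, counts, beliefs):
        A = counts / counts.sum(axis=0, keepdims=True)
        return A @ beliefs
    def posterior(self, counts, beliefs, obs):
        A = counts / counts.sum(axis=0, keepdims=True)
        post = A[obs] * beliefs
        return post / post.sum()
    def reward(self, counts, beliefs):
        return float(beliefs[0])                   # arbitrary preferred state

model = ToyModel()
counts = np.ones((N_OBS, N_STATES))                # flat Dirichlet prior
beliefs = np.ones(N_STATES) / N_STATES

for _ in range(20):
    _, action = plan(counts, beliefs, horizon=2, model=model)
    obs = rng.integers(N_OBS)                      # toy observation channel
    counts = simulate_update(counts, obs, beliefs) # real, not imagined, update
    beliefs = model.posterior(counts, beliefs, obs)
```

The imagined updates inside `plan` and the real update in the loop use the same rule; the distinctive feature of SL is that the imagined ones happen during planning.
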
Rowan Hodson
Laureate Institute for Brain Research, Tulsa, OK, USA
Bruce A. Bassett
University of Cape Town, South Africa; African Institute for Mathematical Sciences, Muizenberg, Cape Town; South African Astronomical Observatory, Observatory, Cape Town
C. V. Hoof
Delft University of Technology, Department of Cognitive Robotics
Benjamin Rosman
Professor at the University of the Witwatersrand, South Africa
Robotics, Artificial Intelligence, Machine Learning, Decision Making, Reinforcement Learning
M. Solms
University of Cape Town, South Africa
Jonathan Shock
Associate Professor in Mathematics and Applied Mathematics, University of Cape Town
Reinforcement learning, String theory, cognitive and computational neuroscience, medical data analysis, machine learning
Ryan Smith
Laureate Institute for Brain Research, Tulsa, OK, USA