From Curiosity to Competence: How World Models Interact with the Dynamics of Exploration

📅 2025-07-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the fundamental trade-off between curiosity (information acquisition) and competence (environmental control) in intrinsic motivation for agents operating in open-ended environments. Methodologically, we propose a unified cognitive-computational framework that jointly optimizes representation learning and exploration policies, revealing their bidirectional interaction and co-evolutionary dynamics. We empirically compare two agent architectures—Tabular (with hand-crafted state abstractions) and Dreamer (end-to-end world-model learning)—to analyze how intrinsic reward signals dynamically modulate exploration behavior via learned internal representations. Results demonstrate that simultaneous optimization of curiosity and competence objectives significantly improves exploration efficiency. Notably, Dreamer agents spontaneously exhibit stage-wise co-evolution reminiscent of cognitive development, validating a novel “representation-driven adaptive exploration” paradigm. This work provides both theoretical foundations and scalable implementation pathways for intrinsic motivation modeling and embodied intelligence advancement.
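The summary above describes agents that combine a curiosity signal (novelty or information gain) with a competence signal (empowerment) into a single intrinsic reward. As a minimal sketch of how such a combination could look for the Tabular setting — the class name, the count-based novelty bonus, and the mutual-information empowerment proxy are all illustrative assumptions, not the paper's actual code:

```python
from collections import defaultdict
import math

class IntrinsicReward:
    """Toy tabular sketch of a combined curiosity + competence signal.

    Hypothetical illustration, not the paper's implementation:
    curiosity is a count-based novelty bonus, and competence is an
    empowerment proxy (mutual information between actions and next
    states under a uniform action policy), estimated from counts.
    """

    def __init__(self, beta=0.5):
        self.beta = beta                           # curiosity/competence trade-off
        self.visits = defaultdict(int)             # state -> visit count
        self.trans = defaultdict(lambda: defaultdict(int))  # (s, a) -> {s': count}

    def update(self, state, action, next_state):
        """Record one observed transition."""
        self.visits[next_state] += 1
        self.trans[(state, action)][next_state] += 1

    def curiosity(self, state):
        """Novelty bonus that decays as a state becomes familiar."""
        return 1.0 / math.sqrt(1 + self.visits[state])

    def competence(self, state, actions):
        """Empowerment proxy: I(A; S') with uniform actions at `state`."""
        dists = []
        for a in actions:
            counts = self.trans[(state, a)]
            total = sum(counts.values())
            if total == 0:                         # unexplored action: no estimate yet
                return 0.0
            dists.append({s: c / total for s, c in counts.items()})
        marginal = defaultdict(float)              # next-state marginal over actions
        for d in dists:
            for s, p in d.items():
                marginal[s] += p / len(dists)
        return sum((p / len(dists)) * math.log(p / marginal[s])
                   for d in dists for s, p in d.items())

    def intrinsic(self, state, actions):
        """Weighted combination of the two drives."""
        return ((1 - self.beta) * self.curiosity(state)
                + self.beta * self.competence(state, actions))
```

In this sketch, a state whose two actions lead reliably to two distinct outcomes earns an empowerment proxy of log 2 (the actions are fully distinguishable by their effects), while an unvisited state earns the maximum novelty bonus — so the weighted sum trades off pursuing the unknown against pursuing the controllable, as the summary describes.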

📝 Abstract
What drives an agent to explore the world while also maintaining control over its environment? From a child at play to scientists in the lab, intelligent agents must balance curiosity (the drive to seek knowledge) with competence (the drive to master and control the environment). Bridging cognitive theories of intrinsic motivation with reinforcement learning, we ask how evolving internal representations mediate the trade-off between curiosity (novelty or information gain) and competence (empowerment). We compare two model-based agents: one using handcrafted state abstractions (Tabular) and one learning an internal world model (Dreamer). The Tabular agent shows that curiosity and competence guide exploration in distinct patterns, and that prioritizing both improves exploration. The Dreamer agent reveals a two-way interaction between exploration and representation learning, mirroring the developmental co-evolution of curiosity and competence. Our findings formalize adaptive exploration as a balance between pursuing the unknown and the controllable, offering insights for cognitive theories and efficient reinforcement learning.
Problem

Research questions and friction points this paper is trying to address.

Balancing curiosity and competence in intelligent agents
Exploring trade-offs between novelty-seeking and environment control
Studying interaction between exploration and representation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Balancing curiosity and competence in exploration
Comparing Tabular and Dreamer model-based agents
Two-way interaction between exploration and representation learning
Fryderyk Mantiuk
Human and Machine Cognition Lab, University of Tübingen, Tübingen, Germany
Hanqi Zhou
Human and Machine Cognition Lab, University of Tübingen, Tübingen, Germany; Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics
Charley M. Wu
Professor of Computational Cognitive Science, TU Darmstadt
Generalization · Exploration · Compositionality · Social learning · Compression