One Life to Learn: Inferring Symbolic World Models for Stochastic Environments from Unguided Exploration

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of enabling an agent to autonomously construct an executable, symbolic world model of a complex, stochastic environment from a single, unguided interaction episode. To this end, the authors propose OneLife, a framework that models environmental dynamics via condition-triggered programmatic laws (in precondition–effect form) organized into a dynamic computation graph, so that inference and optimization route only through the laws relevant to each state and learning remains tractable under sparse rule activation. They also introduce an evaluation protocol that measures state ranking (distinguishing plausible future states from implausible ones) and state fidelity (generating future states that closely resemble reality), and develop Crafter-OO, a reimplementation of the Crafter environment that exposes a structured, object-oriented symbolic state and a pure transition function over that state. Experiments show that OneLife outperforms a strong baseline on 16 of 23 scenarios, generates high-fidelity future-state predictions, and supports effective planning, with simulated rollouts identifying superior strategies, establishing a foundation for autonomous, programmatic world-model construction from a single exploration episode.
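The precondition–effect structure described above can be illustrated with a minimal sketch. The `Law` class, field names, and the example rule below are hypothetical stand-ins for exposition, not OneLife's actual API; the key idea is that a law carries a precondition deciding when it fires and a possibly stochastic effect applied to the symbolic state.

```python
import random
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of a condition-triggered law in precondition-effect
# form. Names (Law, precondition, effect) are illustrative only.
@dataclass
class Law:
    name: str
    precondition: Callable[[dict], bool]           # fires only in relevant states
    effect: Callable[[dict, random.Random], dict]  # possibly stochastic state update

# Toy Crafter-like rule: drinking while facing water usually restores thirst.
drink = Law(
    name="drink_water",
    precondition=lambda s: s["facing"] == "water",
    effect=lambda s, rng: {**s, "thirst": min(9, s["thirst"] + 1)}
           if rng.random() < 0.9 else s,
)

state = {"facing": "water", "thirst": 3}
rng = random.Random(0)
if drink.precondition(state):       # law activates only when its condition holds
    state = drink.effect(state, rng)
```

Because a law that is never activated contributes nothing to a prediction, learning can focus only on the rules a transition actually exercised.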

📝 Abstract
Symbolic world modeling requires inferring and representing an environment's transitional dynamics as an executable program. Prior work has focused on largely deterministic environments with abundant interaction data, simple mechanics, and human guidance. We address a more realistic and challenging setting, learning in a complex, stochastic environment where the agent has only "one life" to explore a hostile environment without human guidance. We introduce OneLife, a framework that models world dynamics through conditionally-activated programmatic laws within a probabilistic programming framework. Each law operates through a precondition-effect structure, activating in relevant world states. This creates a dynamic computation graph that routes inference and optimization only through relevant laws, avoiding scaling challenges when all laws contribute to predictions about a complex, hierarchical state, and enabling the learning of stochastic dynamics even with sparse rule activation. To evaluate our approach under these demanding constraints, we introduce a new evaluation protocol that measures (a) state ranking, the ability to distinguish plausible future states from implausible ones, and (b) state fidelity, the ability to generate future states that closely resemble reality. We develop and evaluate our framework on Crafter-OO, our reimplementation of the Crafter environment that exposes a structured, object-oriented symbolic state and a pure transition function that operates on that state alone. OneLife can successfully learn key environment dynamics from minimal, unguided interaction, outperforming a strong baseline on 16 out of 23 scenarios tested. We also test OneLife's planning ability, with simulated rollouts successfully identifying superior strategies. Our work establishes a foundation for autonomously constructing programmatic world models of unknown, complex environments.
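The state-ranking part of the evaluation protocol can be sketched as follows: the learned model assigns a plausibility score to each candidate future state, and evaluation checks that the real successor outranks implausible distractors. The scoring function here is a toy stand-in (per-key agreement with model expectations), not OneLife's actual probabilistic scoring.

```python
# Minimal sketch of the state-ranking evaluation idea. The scoring
# function below is a hypothetical stand-in for a learned model's
# likelihood over symbolic states.
def score(model: dict, state: dict) -> float:
    """Toy plausibility score: count of keys matching model expectations."""
    return sum(1.0 for k, v in model.items() if state.get(k) == v)

def rank_states(model: dict, candidates: list) -> list:
    """Sort candidate future states from most to least plausible."""
    return sorted(candidates, key=lambda s: score(model, s), reverse=True)

model = {"health": 9, "inventory_wood": 1}          # expected next-state values
real_next = {"health": 9, "inventory_wood": 1}      # plausible successor
distractor = {"health": 0, "inventory_wood": 5}     # implausible successor
ranked = rank_states(model, [distractor, real_next])
```

State fidelity, the protocol's second axis, would instead compare a state sampled from the model against the observed successor rather than ranking a fixed candidate set.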
Problem

Research questions and friction points this paper is trying to address.

Learning symbolic world models from unguided exploration in stochastic environments
Inferring environment dynamics through conditionally-activated programmatic laws
Autonomously constructing programmatic models of complex unknown environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probabilistic programming framework models stochastic dynamics
Conditionally-activated programmatic laws fire only in relevant world states, keeping rule activation sparse
Dynamic computation graph routes inference and optimization through only the activated laws
Evaluation protocol measures state ranking and state fidelity under minimal interaction
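The dynamic-computation-graph idea above can be sketched as a step function that filters the law set by precondition and applies only the active subset, so each transition touches a sparse set of rules. The law representation and example rules are illustrative assumptions, not the paper's implementation.

```python
import random

# Sketch of routing a prediction through only the laws whose
# preconditions hold in the current state (illustrative representation).
def step(state: dict, laws: list, rng: random.Random):
    active = [law for law in laws if law["pre"](state)]  # dynamic graph: relevant laws only
    for law in active:
        state = law["eff"](state, rng)
    return state, [law["name"] for law in active]

# Two toy Crafter-like laws; only the first is relevant to the state below.
laws = [
    {"name": "grow_sapling",
     "pre": lambda s: s["sapling"] and s["rain"],
     "eff": lambda s, r: {**s, "tree": s["tree"] + 1}},
    {"name": "zombie_attack",
     "pre": lambda s: s["night"],
     "eff": lambda s, r: {**s, "health": s["health"] - (1 if r.random() < 0.5 else 0)}},
]

state = {"sapling": True, "rain": True, "night": False, "tree": 0, "health": 9}
state, fired = step(state, laws, random.Random(1))
```

Because inactive laws never enter the computation, gradient or likelihood updates concentrate on the few rules each observed transition actually exercised, which is what makes learning feasible from sparse feedback.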