Modeling Open-World Cognition as On-Demand Synthesis of Probabilistic Models

📅 2025-07-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the cognitive challenge of dynamically retrieving relevant information from vast background knowledge and performing coherent causal reasoning in open-world settings. We propose a “demand-driven probabilistic model synthesis” cognitive architecture that tightly integrates large language models (LLMs) with probabilistic programs (PPs): LLMs perform semantic retrieval to identify globally relevant knowledge, while PPs construct interpretable, composable, task-specific mental models that ensure local logical consistency and causal traceability. A dynamic model synthesis mechanism orchestrates synergistic inference between the two components. Evaluated on the “Model Olympics” reasoning benchmark, our approach significantly outperforms pure-LLM baselines and achieves higher predictive accuracy for human judgments on novel causal scenarios. These results empirically validate the architecture’s psychological plausibility and its value for computational cognitive modeling.

📝 Abstract
When faced with novel situations, people are able to marshal relevant considerations from a wide range of background knowledge and put these to use in inferences and predictions. What permits us to draw in globally relevant information and reason over it coherently? Here, we explore the hypothesis that people use a combination of distributed and symbolic representations to construct bespoke mental models tailored to novel situations. We propose a computational implementation of this idea -- a "Model Synthesis Architecture" (MSA) -- using language models to implement global relevance-based retrieval and model synthesis, and probabilistic programs to implement bespoke, coherent world models. We evaluate our MSA as a model of human judgments on a novel reasoning dataset. The dataset -- built around a "Model Olympics" domain of sports vignettes -- tests models' capacity for human-like, open-ended reasoning by requiring (i) judgments about novel causal structures described in language; (ii) drawing on large bodies of background knowledge; and (iii) doing both in light of observations that introduce arbitrary novel variables. Our MSA approach captures human judgments better than language-model-only baselines, under both direct and chain-of-thought generations from the LM that supports model synthesis. These results suggest that MSAs can be implemented in a way that mirrors people's ability to deliver locally coherent reasoning over globally relevant variables, offering a path to understanding and replicating human reasoning in open-ended domains.
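As a toy illustration of the probabilistic-program side of this pipeline, the sketch below hand-writes the kind of small, bespoke model an MSA's language model might synthesize for a two-player sports vignette, and answers a query by rejection sampling. The priors, variables, and inference method here are illustrative assumptions, not the paper's actual implementation.

```python
import random

def synthesized_model():
    """A hypothetical stand-in for a probabilistic program an LLM might
    synthesize for a two-player sports vignette (not the paper's code)."""
    skill_a = random.gauss(0.0, 1.0)  # latent skill priors
    skill_b = random.gauss(0.0, 1.0)
    # Noisy match outcome: A wins when the noisy skill gap is positive
    first_match_a_wins = (skill_a - skill_b + random.gauss(0.0, 1.0)) > 0
    return skill_a, skill_b, first_match_a_wins

def rematch_posterior(observed_a_won, n_samples=20000):
    """Rejection sampling: condition on the observed first match, then
    predict a rematch played under the same latent skills."""
    wins = kept = 0
    for _ in range(n_samples):
        skill_a, skill_b, first = synthesized_model()
        if first != observed_a_won:
            continue  # reject samples inconsistent with the observation
        kept += 1
        wins += (skill_a - skill_b + random.gauss(0.0, 1.0)) > 0
    return wins / kept

random.seed(0)
p = rematch_posterior(observed_a_won=True)
# Having observed A win once, the model favors A in a rematch (p > 0.5)
```

Because the synthesized model is an explicit generative program, the conditioning step is transparent and causally traceable, which is the "local coherence" property the architecture is after.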
Problem

Research questions and friction points this paper is trying to address.

How humans synthesize relevant knowledge for novel situations
Combining distributed and symbolic representations for reasoning
Evaluating model synthesis against human open-ended reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines distributed and symbolic representations for mental models
Uses language models for relevance-based retrieval and synthesis
Employs probabilistic programs for coherent world modeling