🤖 AI Summary
This work addresses the cognitive mechanisms underlying efficient human planning under resource constraints. Method: We propose that cognitive maps are fundamentally generative, compositional, procedural representations rather than static spatial encodings, and formalize them via an integrated framework combining behavioral experiments, Bayesian program induction, large language models (as implicit models of humans' structured priors), and a resource-bounded probabilistic planning framework. Contribution/Results: Our model is the first to formalize cognitive maps as generative programs that exploit environmental predictability and redundancy, going beyond traditional geometric or graph-network paradigms. It enables modular reasoning and planning guided by structured priors. Empirical evaluation demonstrates that it significantly outperforms classical resource-constrained planning algorithms in both computational and memory efficiency, while more accurately predicting human navigation behavior in structured environments.
📝 Abstract
Making sense of the world and acting in it rely on building simplified mental representations that abstract away aspects of reality. This principle of cognitive mapping is universal to agents with limited resources. Living organisms, people, and algorithms all face the problem of forming functional representations of their world under various computational constraints. In this work, we explore the hypothesis that human resource-efficient planning may arise from representing the world as predictably structured. Building on the metaphor of concepts as programs, we propose that cognitive maps can take the form of generative programs that exploit predictability and redundancy, in contrast to directly encoding spatial layouts. We use a behavioral experiment to show that people who navigate in structured spaces rely on modular planning strategies that align with programmatic map representations. We describe a computational model that predicts human behavior in a variety of structured scenarios. This model infers a small distribution over possible programmatic cognitive maps conditioned on human prior knowledge of the world, and uses this distribution to generate resource-efficient plans. Our model leverages a Large Language Model as an embedding of human priors, implicitly learned through training on a vast corpus of human data. Our model demonstrates improved computational efficiency, requires drastically less memory, and outperforms unstructured planning algorithms with cognitive constraints at predicting human behavior, suggesting that human planning strategies rely on programmatic cognitive maps.
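The core idea of a programmatic cognitive map versus a direct spatial encoding can be illustrated with a minimal sketch. This is not the paper's implementation; the function names (`programmatic_map`, `plan_to_room`) and the corridor-of-rooms environment are hypothetical, chosen only to show how a generative program compresses a redundant layout and supports modular planning.

```python
# Illustrative sketch (hypothetical, not the paper's actual model):
# a programmatic cognitive map stores a generative program for a
# structured environment instead of an explicit cell-by-cell layout.

def programmatic_map(n_rooms=4, room_size=3):
    """Expand a corridor of identical rooms from a compact program.

    A direct encoding stores n_rooms * room_size cells; the program
    stores only a room template plus a repetition count, exploiting
    the environment's redundancy.
    """
    room = ["open"] * (room_size - 1) + ["door"]  # reusable sub-structure
    return room * n_rooms                         # repetition = compression

def plan_to_room(k):
    """Modular plan: repeat a per-room subroutine k times, rather than
    searching over every cell of the fully expanded map."""
    return ["traverse_room", "pass_door"] * k

explicit_map = programmatic_map()   # 12 cells when expanded
plan = plan_to_room(2)              # 2 reusable subroutine calls
```

The memory asymmetry is the point: the program's description length stays constant as `n_rooms` grows, while the explicit encoding grows linearly, and plans inherit the same modular structure.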