People use fast, flat goal-directed simulation to reason about novel problems

📅 2025-10-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how humans rapidly and systematically make metacognitive judgments, such as whether a game is likely to be fair or fun, in zero-experience scenarios (e.g., first encountering an unfamiliar strategic game). Method: The authors propose the "Intuitive Gamer," a computational cognitive model centered on a fast-and-flat probabilistic simulation mechanism: it relies on a small number of stochastic samples and simple goal-directed heuristics for evaluating actions, deliberately avoiding deep search. Inspired by Monte Carlo tree search but substantially lighter weight, the model is validated against behavioral data from large-scale studies (over 1000 participants) across 121 two-player strategic board games. Contribution/Results: The model predicts human judgments and decisions significantly more accurately than much more compute-intensive expert-level models, across conditions ranging from no experience at all to a single round of play to indirect experience watching another player. It provides a computationally tractable, empirically testable framework for understanding adaptive intuitive reasoning in strategic domains.

📝 Abstract
Games have long been a microcosm for studying planning and reasoning in both natural and artificial intelligence, especially with a focus on expert-level or even super-human play. But real life also pushes human intelligence along a different frontier, requiring people to flexibly navigate decision-making problems that they have never thought about before. Here, we use novice gameplay to study how people make decisions and form judgments in new problem settings. We show that people are systematic and adaptively rational in how they play a game for the first time, or evaluate a game (e.g., how fair or how fun it is likely to be) before they have played it even once. We explain these capacities via a computational cognitive model that we call the "Intuitive Gamer". The model is based on mechanisms of fast and flat (depth-limited) goal-directed probabilistic simulation--analogous to those used in Monte Carlo tree-search models of expert game-play, but scaled down to use very few stochastic samples, simple goal heuristics for evaluating actions, and no deep search. In a series of large-scale behavioral studies with over 1000 participants and 121 two-player strategic board games (almost all novel to our participants), our model quantitatively captures human judgments and decisions varying the amount and kind of experience people have with a game--from no experience at all ("just thinking"), to a single round of play, to indirect experience watching another person and predicting how they should play--and does so significantly better than much more compute-intensive expert-level models. More broadly, our work offers new insights into how people rapidly evaluate, act, and make suggestions when encountering novel problems, and could inform the design of more flexible and human-like AI systems that can determine not just how to solve new tasks, but whether a task is worth thinking about at all.
Problem

Research questions and friction points this paper is trying to address.

Studying how people make decisions in novel problem settings
Modeling human judgment in unfamiliar strategic board games
Explaining rapid evaluation and action in new situations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fast flat goal-directed simulation for reasoning
Depth-limited probabilistic simulation with few samples
Simple goal heuristics without deep search
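The mechanism in the list above can be sketched in a few lines: instead of building a search tree, each candidate action is scored by a handful of short random continuations evaluated with a cheap goal heuristic. The following is a minimal illustrative sketch, not the paper's actual model; the interface (`legal_moves`, `step`, `goal_heuristic`) and all parameter values are assumptions chosen for the toy example.

```python
import random

def intuitive_value(state, legal_moves, step, goal_heuristic,
                    n_samples=5, depth=3, rng=random):
    """Estimate how promising a state is via fast-and-flat simulation.

    Unlike Monte Carlo tree search, no tree is built and no statistics
    are shared across samples: each sample is one depth-limited random
    trajectory, scored at the end by a simple goal heuristic.
    """
    scores = []
    for _ in range(n_samples):
        s = state
        for _ in range(depth):
            moves = legal_moves(s)
            if not moves:
                break
            s = step(s, rng.choice(moves))  # flat: purely random continuation
        scores.append(goal_heuristic(s))
    return sum(scores) / len(scores) if scores else 0.0


# Toy usage: a "race to 10" game where a move adds 1 or 2 to the count
# and the goal heuristic is simply proximity to 10.
value = intuitive_value(
    state=0,
    legal_moves=lambda s: [1, 2] if s < 10 else [],
    step=lambda s, m: s + m,
    goal_heuristic=lambda s: s / 10.0,
    n_samples=20,
    depth=4,
    rng=random.Random(0),  # seeded for reproducibility
)
print(round(value, 2))
```

The key design point the paper emphasizes is cost: with a few samples and a shallow depth limit, the whole evaluation is a few dozen state transitions, versus the thousands of rollouts a full tree-search player would spend.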