PuzzleWorld: A Benchmark for Multimodal, Open-Ended Reasoning in Puzzlehunts

📅 2025-06-06
🤖 AI Summary
Current foundation models perform poorly on open-ended multimodal puzzles such as puzzlehunts, achieving only 1–2% final-answer accuracy. Such tasks demand autonomous discovery of implicit problem structure, multi-step iterative reasoning, and creative inference, capabilities akin to those required for scientific discovery or investigative analysis. Method: We introduce PuzzleWorld, the first multimodal benchmark specifically designed to evaluate open-ended reasoning, comprising 667 puzzles. We propose a fine-grained cognitive-skill annotation framework that records full reasoning trajectories and labels the specific capabilities engaged at each step. Our approach integrates multimodal parsing, stepwise reasoning modeling, and trajectory-based supervised fine-tuning. Contribution/Results: Experiments show that small models fine-tuned with trajectory supervision improve step-level accuracy from 4% to 11%, validating the critical role of trajectory supervision in open-ended reasoning. PuzzleWorld and its annotation paradigm establish a new framework for diagnostic evaluation, enhanced interpretability, and principled modeling of open-ended reasoning capabilities.

📝 Abstract
Puzzlehunts are a genre of complex, multi-step puzzles lacking well-defined problem definitions. In contrast to conventional reasoning benchmarks consisting of tasks with clear instructions, puzzlehunts require models to discover the underlying problem structure from multimodal evidence and iterative reasoning, mirroring real-world domains such as scientific discovery, exploratory data analysis, or investigative problem-solving. Despite recent progress in foundation models, their performance on such open-ended settings remains largely untested. In this paper, we introduce PuzzleWorld, a large-scale benchmark of 667 puzzlehunt-style problems designed to assess step-by-step, open-ended, and creative multimodal reasoning. Each puzzle is annotated with the final solution, detailed reasoning traces, and cognitive skill labels, enabling holistic benchmarking and fine-grained diagnostic analysis. Most state-of-the-art models achieve only 1-2% final answer accuracy, with the best model solving only 14% of puzzles and reaching 40% stepwise accuracy. To demonstrate the value of our reasoning annotations, we show that fine-tuning a small model on reasoning traces improves stepwise reasoning from 4% to 11%, while training on final answers alone degrades performance to near zero. Our error analysis reveals that current models exhibit myopic reasoning, are bottlenecked by the limitations of language-based inference, and lack sketching capabilities crucial for visual and spatial reasoning. We release PuzzleWorld at https://github.com/MIT-MI/PuzzleWorld to support future work on building more general, open-ended, and creative reasoning systems.
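To illustrate the contrast the abstract draws between the two fine-tuning setups (training on full reasoning traces versus final answers alone), the sketch below builds supervised fine-tuning targets in both formats. This is a hypothetical illustration, not the paper's actual data pipeline; the record layout, field names, and example puzzle are invented for demonstration.

```python
# Hypothetical sketch: packing an annotated puzzle into two SFT target
# formats -- trajectory supervision (intermediate steps + answer) vs.
# answer-only supervision. Field names ("prompt", "steps", "answer")
# are illustrative, not the benchmark's real schema.

def make_sft_example(puzzle: dict, use_trajectory: bool) -> dict:
    """Build one supervised fine-tuning example from an annotated puzzle."""
    prompt = puzzle["prompt"]
    if use_trajectory:
        # Target includes every intermediate reasoning step plus the answer,
        # giving the model a learning signal on the solution path itself.
        steps = "\n".join(
            f"Step {i + 1}: {s}" for i, s in enumerate(puzzle["steps"])
        )
        target = f"{steps}\nAnswer: {puzzle['answer']}"
    else:
        # Answer-only target: no supervision on intermediate reasoning.
        target = f"Answer: {puzzle['answer']}"
    return {"input": prompt, "output": target}

# Invented toy puzzle for demonstration.
puzzle = {
    "prompt": "Decode the hidden word from the image and clue list.",
    "steps": [
        "Extract the first letter of each clue answer.",
        "Read the letters in clue order to spell a word.",
    ],
    "answer": "LANTERN",
}

trajectory_example = make_sft_example(puzzle, use_trajectory=True)
answer_only_example = make_sft_example(puzzle, use_trajectory=False)
```

Under this framing, the paper's finding is that fine-tuning on the trajectory-style targets improves stepwise accuracy (4% to 11%), while fine-tuning on the answer-only targets degrades performance to near zero.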
Problem

Research questions and friction points this paper is trying to address.

Assessing open-ended multimodal reasoning in puzzlehunts
Evaluating models' ability to discover problem structure
Benchmarking creative step-by-step reasoning performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal benchmark for open-ended reasoning
Fine-tuning improves stepwise reasoning accuracy
Diagnostic analysis reveals model limitations
Authors
Hengzhi Li — Massachusetts Institute of Technology, Imperial College London
Brendon Jiang — Massachusetts Institute of Technology
Alexander Naehu — Massachusetts Institute of Technology
Regan Song — Massachusetts Institute of Technology
Justin Zhang — Massachusetts Institute of Technology
Megan Tjandrasuwita — PhD Student, MIT (multimodal alignment, large vision-language models, neurosymbolic reasoning)
Chanakya Ekbote — Grad Student, MIT (graph representation learning, Bayesian inference, optimization, generative AI, AI for biology)
Steven-Shine Chen — Massachusetts Institute of Technology, Imperial College London
Adithya Balachandran — Massachusetts Institute of Technology
Wei Dai — Massachusetts Institute of Technology
Rebecca Chang — Massachusetts Institute of Technology
Paul Pu Liang — Massachusetts Institute of Technology