🤖 AI Summary
Current foundation models perform poorly on open-ended multimodal puzzles (e.g., puzzlehunts), achieving only 1–2% final answer accuracy, because such tasks demand autonomous discovery of implicit problem structure, multi-step iterative reasoning, and creative inference—capabilities akin to scientific discovery or investigative analysis.
Method: We introduce PuzzleWorld, the first multimodal benchmark specifically designed for evaluating open-ended reasoning, comprising 667 puzzles. We propose a fine-grained cognitive skill annotation framework that records full reasoning trajectories and labels the specific capabilities engaged at each step. Our approach integrates multimodal parsing, stepwise reasoning modeling, and trajectory-based supervised fine-tuning.
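The trajectory-based supervision described above can be sketched as unrolling each annotated puzzle into per-step training examples. This is a minimal illustrative sketch, not the released PuzzleWorld schema: the field names (`statement`, `steps`, `rationale`, `skills`) and the prompt format are assumptions.

```python
# Hypothetical sketch: turning a PuzzleWorld-style annotated reasoning
# trace into supervised fine-tuning examples. Field names ("statement",
# "steps", "rationale", "skills") are illustrative assumptions, not the
# benchmark's actual data schema.

def trace_to_sft_examples(puzzle: dict) -> list[dict]:
    """Unroll one annotated puzzle into per-step (prompt, target) pairs.

    Each example conditions on the puzzle statement plus all earlier
    reasoning steps, and asks the model to produce the next step,
    which is how trajectory supervision differs from training on the
    final answer alone.
    """
    examples = []
    context = f"Puzzle: {puzzle['statement']}\n"
    for step in puzzle["steps"]:
        examples.append({
            "prompt": context + "Next reasoning step:",
            "target": step["rationale"],
            "skills": step.get("skills", []),  # cognitive skill labels
        })
        # Earlier steps become context for the next training example.
        context += f"Step: {step['rationale']}\n"
    return examples


if __name__ == "__main__":
    demo = {
        "statement": "Decode the hidden word from the flag images.",
        "steps": [
            {"rationale": "Each flag's first letter spells a word.",
             "skills": ["pattern recognition"]},
            {"rationale": "The letters spell HELLO.",
             "skills": ["decoding"]},
        ],
    }
    examples = trace_to_sft_examples(demo)
    print(len(examples))  # one SFT example per annotated step
```

Examples produced this way can be fed to any standard instruction-tuning pipeline; the key design choice is that every intermediate step, not just the final answer, contributes a training signal.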
Contribution/Results: Experiments show that small models fine-tuned with trajectory supervision improve step-level accuracy from 4% to 11%, validating the critical role of trajectory supervision in open-ended reasoning. PuzzleWorld and its annotation paradigm establish a new framework for diagnostic evaluation, enhanced interpretability, and principled modeling of open-ended reasoning capabilities.
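The step-level accuracy reported above can be illustrated with a small metric sketch: credit each predicted reasoning step that matches the annotated trace in order, rather than scoring only the final answer. The matching rule here (whitespace- and case-normalized exact equality) is an assumption for illustration; the benchmark's actual matching criterion may differ.

```python
# Hypothetical sketch of a stepwise accuracy metric: fraction of
# annotated reference steps matched, in order, by the model's predicted
# steps. Exact-match after normalization is an illustrative assumption.

def _norm(s: str) -> str:
    """Collapse whitespace and lowercase for lenient comparison."""
    return " ".join(s.lower().split())


def stepwise_accuracy(predicted: list[str], reference: list[str]) -> float:
    """Return the fraction of reference steps hit, preserving order."""
    ref = [_norm(s) for s in reference]
    hits = 0
    i = 0  # index of the next unmatched reference step
    for p in map(_norm, predicted):
        if i < len(ref) and p == ref[i]:
            hits += 1
            i += 1
    return hits / len(ref) if ref else 0.0
```

Under this metric, a model that reproduces two of three annotated steps in order scores 2/3, regardless of whether it reaches the final answer.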
📝 Abstract
Puzzlehunts are a genre of complex, multi-step puzzles lacking well-defined problem definitions. In contrast to conventional reasoning benchmarks consisting of tasks with clear instructions, puzzlehunts require models to discover the underlying problem structure from multimodal evidence through iterative reasoning, mirroring real-world domains such as scientific discovery, exploratory data analysis, or investigative problem-solving. Despite recent progress in foundation models, their performance in such open-ended settings remains largely untested. In this paper, we introduce PuzzleWorld, a large-scale benchmark of 667 puzzlehunt-style problems designed to assess step-by-step, open-ended, and creative multimodal reasoning. Each puzzle is annotated with the final solution, detailed reasoning traces, and cognitive skill labels, enabling holistic benchmarking and fine-grained diagnostic analysis. Most state-of-the-art models achieve only 1–2% final answer accuracy, with the best model solving only 14% of puzzles and reaching 40% stepwise accuracy. To demonstrate the value of our reasoning annotations, we show that fine-tuning a small model on reasoning traces improves stepwise reasoning from 4% to 11%, while training on final answers alone degrades performance to near zero. Our error analysis reveals that current models exhibit myopic reasoning, are bottlenecked by the limitations of language-based inference, and lack the sketching capabilities crucial for visual and spatial reasoning. We release PuzzleWorld at https://github.com/MIT-MI/PuzzleWorld to support future work on building more general, open-ended, and creative reasoning systems.