🤖 AI Summary
Existing benchmarks lack fine-grained evaluation of multimodal abstract and algorithmic reasoning in large language models (LLMs) under vision-language collaboration.
Method: We introduce the first multimodal puzzle benchmark inspired by ARC-AGI, integrating human annotation, controlled prompt engineering, and computational cost analysis.
Contribution/Results: Systematic evaluation of GPT-[n] and o-[n] models reveals: (1) Reasoning capabilities evolve in discrete stages—while o1 surpasses humans on symbolic tasks, its accuracy on multimodal abstract puzzles falls below 40%, and on algorithmic puzzles it approaches random chance; (2) Performance gains incur steep computational costs—o1’s inference cost is nearly 750× that of GPT-4o; (3) The core bottleneck lies in insufficient coupling between cross-modal abstract representation and procedural reasoning. This work establishes a rigorous, cost-aware framework for diagnosing multimodal reasoning limitations in state-of-the-art LLMs.
📝 Abstract
The releases of OpenAI's o1 and o3 mark a significant paradigm shift in Large Language Models towards advanced reasoning capabilities. Notably, o3 outperformed humans in novel problem-solving and skill acquisition on the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI). However, this benchmark is limited to symbolic patterns, whereas humans often perceive and reason about multimodal scenarios involving both vision and language data. Thus, there is an urgent need to investigate advanced reasoning capabilities in multimodal tasks. To this end, we track the evolution of the GPT-[n] and o-[n] series models on challenging multimodal puzzles that require fine-grained visual perception combined with abstract or algorithmic reasoning. Our results reveal a clear upward trend in reasoning capabilities across model iterations, with notable performance jumps across GPT-series models and subsequently to o1. Nonetheless, we observe that the o1 model still struggles with simple multimodal puzzles requiring abstract reasoning, and its performance on algorithmic puzzles remains poor. Moreover, the superior performance of o1 comes at nearly 750 times the computational cost of GPT-4o, raising concerns about its efficiency. We plan to continuously track new models in the series and update our results in this paper accordingly. All resources used in this evaluation are openly available at https://github.com/declare-lab/LLM-PuzzleTest.