🤖 AI Summary
State-of-the-art large multimodal models (LMMs) show fundamental deficiencies in spatial cognition and visual understanding, while mainstream vision benchmarks are rapidly saturating as models improve, eroding their evaluative value. Method: We propose the "impossible benchmark" paradigm and introduce ZeroBench, a lightweight visual reasoning benchmark grounded in human visual cognition. It comprises 100 manually curated questions plus 334 easier subquestions, and is validated through zero-shot evaluation of multiple frontier models together with detailed error attribution analysis. Contribution/Results: All 20 state-of-the-art LMMs evaluated score 0.0% on ZeroBench, making it the first benchmark on which current frontier models fail every question. Its difficulty makes it resistant to rapid saturation, gives it lasting evaluative value, and leaves room for scalable extension. The benchmark is publicly released to advance foundational research in visual understanding.
📝 Abstract
Large Multimodal Models (LMMs) exhibit major shortfalls when interpreting images and, by some measures, have poorer spatial cognition than small children or animals. Despite this, they attain high scores on many popular visual benchmarks, with headroom rapidly eroded by an ongoing surge of model progress. To address this, there is a pressing need for difficult benchmarks that remain relevant for longer. We take this idea to its limit by introducing ZeroBench, a lightweight visual reasoning benchmark that is entirely impossible for contemporary frontier LMMs. Our benchmark consists of 100 manually curated questions and 334 less difficult subquestions. We evaluate 20 LMMs on ZeroBench, all of which score 0.0%, and rigorously analyse the errors. To encourage progress in visual understanding, we publicly release ZeroBench.
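As a rough illustration of the evaluation described above, the sketch below shows how one might score an LMM on ZeroBench-style questions with strict exact-match accuracy. The dataset identifier, split name, field names, and the `answer_question` wrapper are placeholders assumed for illustration; they are not the paper's released evaluation code.

```python
# Minimal sketch of a ZeroBench-style evaluation loop.
# Assumptions (not from the released benchmark code): the dataset identifier,
# the split name, the field names ("image", "question", "answer"), and the
# user-supplied `answer_question` callable wrapping whichever LMM is evaluated.
from datasets import load_dataset


def exact_match(prediction: str, reference: str) -> bool:
    """Strict string comparison after basic normalisation."""
    return prediction.strip().lower() == reference.strip().lower()


def evaluate(answer_question, dataset_id: str = "zerobench/zerobench") -> float:
    # Load the benchmark questions (dataset id and split are placeholders).
    ds = load_dataset(dataset_id, split="test")
    correct = 0
    for example in ds:
        # `answer_question` takes an image and a question and returns the
        # model's final answer as a string.
        prediction = answer_question(example["image"], example["question"])
        correct += int(exact_match(prediction, example["answer"]))
    return correct / len(ds)


# Usage: accuracy = evaluate(my_lmm_answer_fn)
# On the 100 main ZeroBench questions, every LMM evaluated in the paper scores 0.0%.
```

Strict exact-match scoring on the final answer is what makes a 0.0% result unambiguous: partial credit or lenient matching would blur the "impossible benchmark" signal.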