GGBench: A Geometric Generative Reasoning Benchmark for Unified Multimodal Models

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks typically evaluate discriminative understanding or free-form image generation in isolation, failing to characterize the integrated "comprehension–planning–construction" cognitive process underlying generative reasoning in unified multimodal models. To address this gap, we propose GGBench, the first benchmark designed specifically for evaluating geometric generative reasoning in unified multimodal models. It adopts natural-language-instructed, controllable, and precise 2D graphic construction as a unifying paradigm to systematically assess joint cross-modal understanding and generation capabilities. GGBench features hierarchically structured tasks that integrate semantic parsing, spatial reasoning, and controllable generation, enabling fine-grained diagnostic analysis of the generative process. Through standardized evaluation protocols and reproducible metrics, it fills a critical gap in assessing active geometric construction, improving the fidelity with which generative cognition can be evaluated. GGBench thus establishes a more rigorous evaluation standard for the next generation of intelligent systems.

📝 Abstract
The advent of Unified Multimodal Models (UMMs) signals a paradigm shift in artificial intelligence, moving from passive perception to active, cross-modal generation. Despite their unprecedented ability to synthesize information, a critical gap persists in evaluation: existing benchmarks primarily assess discriminative understanding or unconstrained image generation separately, failing to measure the integrated cognitive process of generative reasoning. To bridge this gap, we propose that geometric construction provides an ideal testbed as it inherently demands a fusion of language comprehension and precise visual generation. We introduce GGBench, a benchmark designed specifically to evaluate geometric generative reasoning. It provides a comprehensive framework for systematically diagnosing a model's ability to not only understand and reason but to actively construct a solution, thereby setting a more rigorous standard for the next generation of intelligent systems. Project website: https://opendatalab-raiser.github.io/GGBench/.
Problem

Research questions and friction points this paper is trying to address.

Evaluating unified multimodal models' generative reasoning capabilities
Bridging the gap between discriminative understanding and image generation
Assessing geometric construction requiring language and visual synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes geometric construction as generative reasoning testbed
Introduces benchmark for multimodal geometric reasoning evaluation
Systematically diagnoses language comprehension and visual generation
Jingxuan Wei
University of Chinese Academy of Sciences
Caijun Jia
University of Chinese Academy of Sciences
Xi Bai
University of Chinese Academy of Sciences
Xinglong Xu
University of Chinese Academy of Sciences
Siyuan Li
Shanghai Artificial Intelligence Laboratory
Linzhuang Sun
University of Chinese Academy of Sciences
Bihui Yu
University of Chinese Academy of Sciences
Conghui He
Shanghai AI Laboratory
Lijun Wu
Shanghai AI Laboratory
Cheng Tan
Shanghai Artificial Intelligence Laboratory