Beyond Words and Pixels: A Benchmark for Implicit World Knowledge Reasoning in Generative Models

📅 2025-11-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current text-to-image (T2I) models exhibit significant deficiencies in implicit world knowledge acquisition and multi-physical interaction reasoning. Moreover, prevailing evaluation protocols focus narrowly on compositional alignment or single-turn visual question answering (VQA), lacking systematic assessment of commonsense grounding, causal logic, and auditable evidence. To address this gap, we introduce PicWorld—the first fine-grained benchmark explicitly designed to evaluate implicit knowledge and physical causal reasoning—comprising 1,100 cross-category prompts. We further propose PW-Agent, a novel multi-agent hierarchical evaluation framework integrating visual evidence decomposition, VQA-based verification, and physics-aware realism scoring, augmented with a traceable evidence-chain mechanism. Extensive evaluation across 17 state-of-the-art T2I models reveals pervasive logical inconsistencies and physical implausibility. PicWorld establishes a reproducible, evidence-driven evaluation paradigm and provides empirically grounded pathways for knowledge-enhanced generative model development.

📝 Abstract
Text-to-image (T2I) models today are capable of producing photorealistic, instruction-following images, yet they still frequently fail on prompts that require implicit world knowledge. Existing evaluation protocols either emphasize compositional alignment or rely on single-round VQA-based scoring, leaving critical dimensions, such as knowledge grounding, multi-physics interactions, and auditable evidence, substantially undertested. To address these limitations, we introduce PicWorld, the first comprehensive benchmark that assesses T2I models' grasp of implicit world knowledge and physical causal reasoning. The benchmark consists of 1,100 prompts across three core categories. To facilitate fine-grained evaluation, we propose PW-Agent, an evidence-grounded multi-agent evaluator that hierarchically assesses images for physical realism and logical consistency by decomposing prompts into verifiable visual evidence. A thorough analysis of 17 mainstream T2I models on PicWorld shows that all of them exhibit, to varying degrees, fundamental limitations in implicit world knowledge and physical causal reasoning. These findings highlight the need for reasoning-aware, knowledge-integrative architectures in future T2I systems.
Problem

Research questions and friction points this paper is trying to address.

Evaluating implicit world knowledge reasoning in text-to-image models
Assessing physical causal reasoning capabilities of generative systems
Developing evidence-grounded evaluation for knowledge integration in AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel benchmark for implicit world knowledge evaluation
Multi-agent hierarchical evaluator for physical realism assessment
Evidence-grounded decomposition of prompts into verifiable components
Tianyang Han
The Hong Kong Polytechnic University (PolyU)
Image Generation, Multimodal Large Language Model
Junhao Su
Meituan Inc.
Computer Vision
Junjie Hu
MeiGen AI Team, Meituan; Fudan University
Peizhen Yang
HHMI Janelia Research Campus
Hengyu Shi
MeiGen AI Team, Meituan
Junfeng Luo
MeiGen AI Team, Meituan
Jialin Gao
National University of Singapore
Video Understanding, Multi-modal Understanding