🤖 AI Summary
Current text-to-image (T2I) models exhibit significant deficiencies in implicit world knowledge acquisition and multi-physical interaction reasoning. Moreover, prevailing evaluation protocols focus narrowly on compositional alignment or single-turn visual question answering (VQA), lacking systematic assessment of commonsense grounding, causal logic, and auditable evidence. To address this gap, we introduce PicWorld—the first fine-grained benchmark explicitly designed to evaluate implicit knowledge and physical causal reasoning—comprising 1,100 cross-category prompts. We further propose PW-Agent, a novel multi-agent hierarchical evaluation framework integrating visual evidence decomposition, VQA-based verification, and physics-aware realism scoring, augmented with a traceable evidence-chain mechanism. Extensive evaluation across 17 state-of-the-art T2I models reveals pervasive logical inconsistencies and physical implausibility. PicWorld establishes a reproducible, evidence-driven evaluation paradigm and provides empirically grounded pathways for knowledge-enhanced generative model development.
📝 Abstract
Text-to-image (T2I) models today are capable of producing photorealistic, instruction-following images, yet they still frequently fail on prompts that require implicit world knowledge. Existing evaluation protocols either emphasize compositional alignment or rely on single-round VQA-based scoring, leaving critical dimensions such as knowledge grounding, multi-physics interactions, and auditable evidence substantially undertested. To address these limitations, we introduce PicWorld, the first comprehensive benchmark that assesses T2I models' grasp of implicit world knowledge and physical causal reasoning. The benchmark consists of 1,100 prompts spanning three core categories. To facilitate fine-grained evaluation, we propose PW-Agent, an evidence-grounded multi-agent evaluator that hierarchically assesses images on physical realism and logical consistency by decomposing prompts into verifiable visual evidence. We conduct a thorough analysis of 17 mainstream T2I models on PicWorld, showing that, to varying degrees, all of them exhibit fundamental limitations in implicit world knowledge and physical causal reasoning. These findings highlight the need for reasoning-aware, knowledge-integrative architectures in future T2I systems.