🤖 AI Summary
Text editing in images must jointly preserve semantic fidelity, geometric coherence, and contextual consistency, capabilities that existing benchmarks assess inadequately because they lack systematic evaluation of physical plausibility, linguistic meaning, and cross-modal reasoning. To address this, we propose TextEditBench, a reasoning-aware benchmark for text editing that moves beyond conventional pixel-level metrics to target three high-level competencies: semantic consistency, physical plausibility, and cross-modal alignment. TextEditBench introduces a novel Semantic Expectation (SE) dimension to quantitatively evaluate multi-step reasoning, context dependency, and layout awareness. Its composite evaluation protocol integrates hierarchical vision-language alignment analysis, physics-informed constraint modeling, and semantic consistency measurement. Extensive experiments reveal that state-of-the-art models perform adequately on simple edits but fail markedly in reasoning-intensive scenarios, exposing critical bottlenecks in multimodal text editing.
📝 Abstract
Text rendering has recently emerged as one of the most challenging frontiers in visual generation, drawing significant attention in work on large-scale diffusion and multimodal models. However, text editing within images remains largely unexplored, as it requires generating legible characters while preserving semantic, geometric, and contextual coherence. To fill this gap, we introduce TextEditBench, a comprehensive evaluation benchmark that explicitly focuses on text-centric regions in images. Beyond basic pixel manipulations, our benchmark emphasizes reasoning-intensive editing scenarios that require models to understand physical plausibility, linguistic meaning, and cross-modal dependencies. We further propose a novel evaluation dimension, Semantic Expectation (SE), which measures a model's reasoning ability to maintain semantic consistency, contextual coherence, and cross-modal alignment during text editing. Extensive experiments on state-of-the-art editing systems reveal that while current models can follow simple textual instructions, they still struggle with context-dependent reasoning, physical consistency, and layout-aware integration. By focusing evaluation on this long-overlooked yet fundamental capability, TextEditBench establishes a new testing ground for advancing text-guided image editing and reasoning in multimodal generation.
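To make the Semantic Expectation (SE) dimension concrete, the sketch below shows one plausible way a composite SE score could aggregate per-dimension judgments into a single number. This is a minimal illustration under stated assumptions, not the paper's actual protocol: the function name `semantic_expectation_score`, the sub-score definitions, and the weights are all hypothetical.

```python
# Minimal sketch of a composite Semantic Expectation (SE) score.
# The sub-score names and weights are illustrative assumptions;
# the benchmark's actual protocol and weighting are not specified here.

def semantic_expectation_score(
    semantic_consistency: float,   # does the edited text preserve the scene's meaning?
    physical_plausibility: float,  # does the edit respect lighting, perspective, material?
    cross_modal_alignment: float,  # does the rendered text match the instruction?
    weights: tuple[float, float, float] = (0.4, 0.3, 0.3),  # hypothetical weights
) -> float:
    """Aggregate per-dimension scores in [0, 1] into a single SE score."""
    scores = (semantic_consistency, physical_plausibility, cross_modal_alignment)
    assert all(0.0 <= s <= 1.0 for s in scores), "sub-scores must lie in [0, 1]"
    return sum(w * s for w, s in zip(weights, scores))

# Example: a model that renders legible, instruction-following text
# but ignores perspective scores well on two dimensions, poorly on one.
print(semantic_expectation_score(0.9, 0.4, 0.8))  # ≈ 0.72 (up to float rounding)
```

A weighted average keeps the composite score interpretable on the same [0, 1] scale as its sub-scores; how the benchmark actually combines its hierarchical vision-language alignment, physics-informed constraint, and semantic consistency measurements is left unspecified in the abstract.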