TextEditBench: Evaluating Reasoning-aware Text Editing Beyond Rendering

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Text editing in images faces the challenge of jointly preserving semantic fidelity, geometric coherence, and contextual consistency—capabilities inadequately assessed by existing benchmarks, which lack systematic evaluation of physical plausibility, linguistic meaning, and cross-modal reasoning. To address this, the authors propose TextEditBench, the first reasoning-aware benchmark for text editing, targeting three high-level competencies beyond conventional pixel-level metrics: semantic consistency, physical plausibility, and cross-modal alignment. TextEditBench introduces a novel Semantic Expectation (SE) dimension to quantitatively evaluate multi-step reasoning, context dependency, and layout awareness. Its composite evaluation protocol integrates hierarchical vision-language alignment analysis, physics-informed constraint modeling, and semantic consistency measurement. Extensive experiments reveal that state-of-the-art models perform adequately on simple edits but fail markedly in reasoning-intensive scenarios, exposing critical bottlenecks in multimodal text editing.

📝 Abstract
Text rendering has recently emerged as one of the most challenging frontiers in visual generation, drawing significant attention from large-scale diffusion and multimodal models. However, text editing within images remains largely unexplored, as it requires generating legible characters while preserving semantic, geometric, and contextual coherence. To fill this gap, we introduce TextEditBench, a comprehensive evaluation benchmark that explicitly focuses on text-centric regions in images. Beyond basic pixel manipulations, our benchmark emphasizes reasoning-intensive editing scenarios that require models to understand physical plausibility, linguistic meaning, and cross-modal dependencies. We further propose a novel evaluation dimension, Semantic Expectation (SE), which measures a model's reasoning ability to maintain semantic consistency, contextual coherence, and cross-modal alignment during text editing. Extensive experiments on state-of-the-art editing systems reveal that while current models can follow simple textual instructions, they still struggle with context-dependent reasoning, physical consistency, and layout-aware integration. By focusing evaluation on this long-overlooked yet fundamental capability, TextEditBench establishes a new testing ground for advancing text-guided image editing and reasoning in multimodal generation.
Problem

Research questions and friction points this paper is trying to address.

Evaluates text editing in images beyond basic rendering quality
Focuses on reasoning-intensive scenarios that require semantic and contextual coherence
Measures a model's ability to maintain physical plausibility and cross-modal alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

First benchmark dedicated to evaluating text-centric image editing
Semantic Expectation (SE) dimension measures reasoning-driven consistency
Targets context-dependent editing and physical plausibility challenges
Rui Gui
Central South University
Yang Wan
Central South University
Haochen Han
Pengcheng Laboratory
Dongxing Mao
Central South University
Fangming Liu
Professor, School of Computer Science & Technology, Huazhong University of Science & Technology
AI & Cloud Computing · Datacenter · LLM System · Edge Computing · Green Computing
Min Li
Central South University
Alex Jinpeng Wang
Central South University