🤖 AI Summary
This work addresses two gaps: the limitations of existing image editing models in handling multilingual, text-dense, and structurally complex visual documents (particularly Chinese-language materials), and the absence of dedicated evaluation benchmarks for this task. To bridge these gaps, we introduce the first high-quality, manually annotated bilingual (Chinese-English) benchmark for visual document editing, encompassing diverse and intricate layouts such as academic papers, posters, and exam sheets. We propose a decoupled evaluation framework grounded in OCR parsing that enables fine-grained assessment of editing fidelity across multiple dimensions, including textual content accuracy, layout preservation, and linguistic consistency. Experimental results demonstrate that our benchmark effectively exposes the shortcomings of state-of-the-art models, with automated evaluation scores showing strong correlation with human judgments.
📝 Abstract
In recent years, multimodal image editing models have achieved substantial progress, enabling users to manipulate visual content through natural language in a flexible and interactive manner. Nevertheless, visual document image editing, which involves modifying textual content within images while faithfully preserving the original text style and background context, remains an important yet insufficiently explored research direction. Existing approaches, including AnyText, GlyphControl, and TextCtrl, predominantly focus on English-language scenarios and documents with relatively sparse textual layouts, and therefore fail to adequately address dense, structurally complex documents or non-Latin scripts such as Chinese. To bridge this gap, we propose Visual Doc Edit Bench (VDE Bench), a rigorously human-annotated and human-evaluated benchmark specifically designed to assess image editing models on multilingual and complex visual document editing tasks. The benchmark comprises a high-quality dataset of text-dense documents in both English and Chinese, including academic papers, posters, presentation slides, examination materials, and newspapers. Furthermore, we introduce a decoupled evaluation framework that quantifies editing performance at the OCR parsing level, enabling fine-grained assessment of text modification accuracy. Based on this benchmark, we conduct a comprehensive evaluation of representative state-of-the-art image editing models. Manual verification demonstrates strong consistency between human judgments and automated evaluation metrics. VDE Bench constitutes the first systematic benchmark for evaluating image editing models on multilingual, text-dense visual documents.
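To make the decoupled, OCR-level evaluation concrete, the sketch below shows one plausible way such scoring could work. This is an illustrative assumption, not the VDE Bench implementation: the function names (`score_text_accuracy`, `score_layout`), the use of normalized edit distance for text accuracy, and bounding-box IoU as a layout-preservation proxy are all hypothetical choices; the benchmark's actual metrics and OCR backend are described in the paper itself.

```python
# Illustrative sketch of a decoupled, OCR-level evaluation (not the official
# VDE Bench implementation). Assumes an external OCR step has already produced
# the recognized text and bounding box for the edited region.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via a single-row dynamic-programming table."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[len(b)]

def score_text_accuracy(target: str, ocr_text: str) -> float:
    """1.0 means the OCR of the edited region matches the requested text exactly."""
    if not target and not ocr_text:
        return 1.0
    return 1.0 - edit_distance(target, ocr_text) / max(len(target), len(ocr_text))

def score_layout(box_before, box_after) -> float:
    """IoU of the text bounding box before/after editing as a layout proxy."""
    ax1, ay1, ax2, ay2 = box_before
    bx1, by1, bx2, by2 = box_after
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# Hypothetical example: the instruction asked to replace a title with "视觉文档编辑基准",
# and OCR on the edited image returned a string with one wrong character.
print(score_text_accuracy("视觉文档编辑基准", "视觉文档编辑基淮"))  # = 0.875
print(score_layout((10, 20, 300, 60), (12, 20, 298, 58)))          # ≈ 0.94
```

Scoring the recognized text and the layout separately in this way is what "decoupled" suggests here: a model can be credited for rendering the correct characters even if it slightly shifts the text block, and vice versa.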