ChartEditBench: Evaluating Grounded Multi-Turn Chart Editing in Multimodal Language Models

📅 2026-02-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited capability of current vision-language models in multi-turn, context-dependent chart editing, despite their strong performance on single-turn chart generation. To this end, we introduce ChartEditBench, a benchmark comprising 5,000 difficulty-controlled multi-turn chart-editing chains, and develop a comprehensive evaluation framework that integrates execution validation, pixel-level similarity, and logical consistency checks. Our study presents the first systematic assessment of multimodal large language models in sustained, context-aware chart editing, revealing significant performance degradation due to context fragmentation and error propagation. Notably, models frequently fail on data-manipulation operations, while stylistic adjustments remain comparatively robust. By moving beyond single-turn evaluation paradigms, this work establishes a new benchmark for evaluating multi-turn visual interaction in data visualization.
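For illustration only, here is a minimal Python sketch of how two of the three checks named above (execution validation and pixel-level similarity) could be combined. The helper names `execution_check` and `pixel_similarity`, the matplotlib rendering convention, and the thresholds are assumptions, not the benchmark's actual harness.

```python
# Hypothetical sketch, NOT ChartEditBench's evaluation code.
import subprocess
import sys
import tempfile
from pathlib import Path

import numpy as np
from PIL import Image


def execution_check(code: str, out_png: Path, timeout: int = 60) -> bool:
    """Run candidate chart code in a fresh interpreter; pass iff it renders a file."""
    # Assumes the candidate code builds a matplotlib figure; we append a save step.
    script = (
        code
        + "\nimport matplotlib.pyplot as plt\n"
        + f"plt.savefig({str(out_png)!r})\n"
    )
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(script)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0 and out_png.exists()


def pixel_similarity(a_png: Path, b_png: Path) -> float:
    """Mean per-channel pixel agreement between two rendered charts, in [0, 1]."""
    a = np.asarray(Image.open(a_png).convert("RGB"), dtype=np.float32)
    b = np.asarray(Image.open(b_png).convert("RGB"), dtype=np.float32)
    if a.shape != b.shape:
        return 0.0  # different canvas sizes count as a mismatch
    return float(1.0 - np.abs(a - b).mean() / 255.0)
```

A scorer would typically gate on `execution_check` first (a chart that does not render scores zero) and only then compare the rendered output against the reference image.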

📝 Abstract
While Multimodal Large Language Models (MLLMs) perform strongly on single-turn chart generation, their ability to support real-world exploratory data analysis remains underexplored. In practice, users iteratively refine visualizations through multi-turn interactions that require maintaining common ground, tracking prior edits, and adapting to evolving preferences. We introduce ChartEditBench, a benchmark for incremental, visually grounded chart editing via code, comprising 5,000 difficulty-controlled modification chains and a rigorously human-verified subset. Unlike prior one-shot benchmarks, ChartEditBench evaluates sustained, context-aware editing. We further propose a robust evaluation framework that mitigates limitations of LLM-as-a-Judge metrics by integrating execution-based fidelity checks, pixel-level visual similarity, and logical code verification. Experiments with state-of-the-art MLLMs reveal substantial degradation in multi-turn settings due to error accumulation and breakdowns in shared context, with strong performance on stylistic edits but frequent execution failures on data-centric transformations. ChartEditBench establishes a challenging testbed for grounded, intent-aware multimodal programming.
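As a rough illustration of a multi-turn modification chain, the sketch below threads each edit instruction together with the model's own prior output, which is what lets errors from one turn propagate into the next. The `EditTurn` schema, the prompt wording, and the `generate` callable are hypothetical stand-ins, not ChartEditBench's API.

```python
# Hypothetical sketch of one multi-turn editing chain, assuming a
# generate(prompt) -> code callable that wraps the MLLM under test.
from dataclasses import dataclass
from typing import Callable


@dataclass
class EditTurn:
    instruction: str     # natural-language edit request, e.g. "use a log y-axis"
    reference_code: str  # ground-truth code after the edit (used by the scorer)


def run_chain(initial_code: str, turns: list[EditTurn],
              generate: Callable[[str], str]) -> list[str]:
    """Feed each instruction plus the model's evolving code back in,
    so a mistake made at turn t contaminates every later turn."""
    code = initial_code
    outputs: list[str] = []
    for turn in turns:
        prompt = (
            "Current chart code:\n" + code
            + "\n\nApply this edit and return the complete updated code:\n"
            + turn.instruction
        )
        code = generate(prompt)  # edited code becomes the next turn's context
        outputs.append(code)
    return outputs
```

Scoring each element of `outputs` against the corresponding `reference_code` (via execution, pixel, and logic checks) is what surfaces the error-accumulation effect the abstract describes.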
Problem

Research questions and friction points this paper is trying to address.

multimodal language models
multi-turn chart editing
grounded interaction
exploratory data analysis
visual programming
Innovation

Methods, ideas, or system contributions that make the work stand out.

ChartEditBench
multi-turn editing
grounded multimodal reasoning
execution-based evaluation
visual code verification