🤖 AI Summary
Real-world chart understanding is hindered by the scarcity of high-quality training data and the absence of realistic, comprehensive evaluation benchmarks. To address these challenges, we propose EvoChart, a novel self-training framework designed specifically for real-world chart understanding. EvoChart jointly constructs training data and a high-performance model via vision-language model (VLM)-driven synthetic chart generation. Concurrently, we introduce EvoChart-QA, the first large-scale, expert-annotated benchmark tailored to real-world chart comprehension, comprising 1,250 QA pairs over 650 authentic charts drawn from 140 diverse websites. Experimental results on EvoChart-QA reveal that the strongest proprietary model, GPT-4o, achieves only 49.8% accuracy, whereas EvoChart boosts the accuracy of open-source VLMs to 54.2%, significantly outperforming baseline methods. These findings empirically validate the efficacy of both the self-training paradigm and domain-specific benchmarking in advancing chart understanding capabilities.
📝 Abstract
Chart understanding enables automated data analysis for humans and requires models to achieve highly accurate visual comprehension. While existing Vision-Language Models (VLMs) have shown progress in chart understanding, the lack of high-quality training data and comprehensive evaluation benchmarks hinders their chart comprehension. In this paper, we introduce EvoChart, a novel self-training method for generating synthetic chart data to enhance VLMs' capabilities in real-world chart comprehension. We also propose EvoChart-QA, a novel benchmark for measuring models' chart comprehension abilities in real-world scenarios. Specifically, EvoChart is a unique self-training data synthesis approach that simultaneously produces a high-quality training corpus and a high-performance chart understanding model. EvoChart-QA consists of 650 distinct real-world charts collected from 140 different websites and 1,250 expert-curated questions focused on chart understanding. Experimental results from various open-source and proprietary VLMs tested on EvoChart-QA demonstrate that even the best proprietary model, GPT-4o, achieves only 49.8% accuracy. Moreover, the EvoChart method significantly boosts the performance of open-source VLMs on real-world chart understanding tasks, achieving 54.2% accuracy on EvoChart-QA.