🤖 AI Summary
Existing vision-language models struggle to jointly and effectively comprehend the geometric structures, numerical data, and natural language associated with charts. To address this limitation, this work proposes a code-guided synthesis approach to construct the largest open-source multimodal chart understanding dataset to date, comprising 1.5 million high-quality samples spanning 24 chart types and six plotting libraries. The dataset features fine-grained quintuple alignment among chart images, generation code, underlying data tables, natural language summaries, and question-answer pairs. Rigorous quality control ensures reliability, and the dataset additionally includes expert-annotated, real-world, safety-focused, and localization-specific subsets. Fine-tuned vision-language models leveraging this dataset demonstrate substantial performance gains across multiple benchmarks, validating its efficacy as a strong supervisory signal and advancing the development of robust, generalizable foundation models for visual understanding.
📝 Abstract
Understanding charts requires models to jointly reason over geometric visual patterns, structured numerical data, and natural language -- a capability where current vision-language models (VLMs) remain limited. We introduce ChartNet, a high-quality, million-scale multimodal dataset designed to advance chart interpretation and reasoning. ChartNet leverages a novel code-guided synthesis pipeline to generate 1.5 million diverse chart samples spanning 24 chart types and 6 plotting libraries. Each sample consists of five aligned components: plotting code, rendered chart image, data table, natural language summary, and question-answer pairs with reasoning, providing fine-grained cross-modal alignment. To capture the full spectrum of chart comprehension, ChartNet additionally includes specialized subsets covering human-annotated data, real-world data, safety, and grounding. Moreover, a rigorous quality-filtering pipeline ensures visual fidelity, semantic accuracy, and diversity across chart representations. Fine-tuning on ChartNet consistently improves results across benchmarks, demonstrating its utility as large-scale supervision for multimodal models. As the largest open-source dataset of its kind, ChartNet aims to support the development of foundation models with robust and generalizable capabilities for data visualization understanding. The dataset is publicly available at https://huggingface.co/datasets/ibm-granite/ChartNet
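The quintuple alignment described above can be illustrated with a minimal sketch. The function below is hypothetical and not the paper's actual pipeline: it synthesizes a single sample whose five views (plotting code, rendered image, data table, summary, and QA pair) are all derived from the same seeded data, which is the property that makes code-guided synthesis a reliable supervisory signal. A tiny hand-built SVG stands in for the rendered chart to keep the sketch dependency-free.

```python
import random


def synthesize_sample(seed: int) -> dict:
    """Hypothetical sketch of code-guided synthesis: one sample keeps
    five aligned views of the same chart (code, image, table, summary, QA)."""
    rng = random.Random(seed)

    # 1. Underlying data table, generated programmatically
    categories = ["A", "B", "C", "D"]
    values = [rng.randint(10, 100) for _ in categories]
    table = dict(zip(categories, values))

    # 2. Plotting code stored as text (a matplotlib-style snippet),
    #    so models can align source code with rendered pixels
    code = (
        "import matplotlib.pyplot as plt\n"
        f"plt.bar({categories!r}, {values!r})\n"
        "plt.savefig('chart.png')\n"
    )

    # 3. Rendered image -- a minimal SVG here instead of a real renderer
    bars = "".join(
        f'<rect x="{i * 30}" y="{100 - v}" width="25" height="{v}"/>'
        for i, v in enumerate(values)
    )
    image = (
        '<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="120" height="100">{bars}</svg>'
    )

    # 4. Natural-language summary derived directly from the table
    top = max(table, key=table.get)
    summary = (
        f"A bar chart of four categories; "
        f"'{top}' has the highest value ({table[top]})."
    )

    # 5. Question-answer pair grounded in the same data
    qa = {"question": "Which category has the highest value?", "answer": top}

    return {"code": code, "image": image, "table": table,
            "summary": summary, "qa": qa}
```

Because every component is computed from the same table, the answer to the QA pair can be checked mechanically against the data, which is how a quality-filtering pass can verify semantic accuracy at scale.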