ChartNet: A Million-Scale, High-Quality Multimodal Dataset for Robust Chart Understanding

📅 2026-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language models struggle to jointly and effectively comprehend the geometric structures, numerical data, and natural language associated with charts. To address this limitation, this work proposes a code-guided synthetic approach to construct the largest open-source multimodal chart understanding dataset to date, comprising 1.5 million high-quality samples spanning 24 chart types and six plotting libraries. The dataset features fine-grained quintuple alignment among chart images, generation code, underlying data tables, natural language summaries, and question-answer pairs. Rigorous quality control ensures reliability, including expert-annotated, real-world, safety-focused, and localization-specific subsets. Fine-tuned vision-language models leveraging this dataset demonstrate substantial performance gains across multiple benchmarks, validating its efficacy as a strong supervisory signal and advancing the development of robust, generalizable foundation models for visual understanding.
📝 Abstract
Understanding charts requires models to jointly reason over geometric visual patterns, structured numerical data, and natural language -- a capability where current vision-language models (VLMs) remain limited. We introduce ChartNet, a high-quality, million-scale multimodal dataset designed to advance chart interpretation and reasoning. ChartNet leverages a novel code-guided synthesis pipeline to generate 1.5 million diverse chart samples spanning 24 chart types and 6 plotting libraries. Each sample consists of five aligned components: plotting code, rendered chart image, data table, natural language summary, and question-answer pairs with reasoning, providing fine-grained cross-modal alignment. To capture the full spectrum of chart comprehension, ChartNet additionally includes specialized subsets encompassing human-annotated data, real-world data, safety, and grounding. Moreover, a rigorous quality-filtering pipeline ensures visual fidelity, semantic accuracy, and diversity across chart representations. Fine-tuning on ChartNet consistently improves results across benchmarks, demonstrating its utility as large-scale supervision for multimodal models. As the largest open-source dataset of its kind, ChartNet aims to support the development of foundation models with robust and generalizable capabilities for data visualization understanding. The dataset is publicly available at https://huggingface.co/datasets/ibm-granite/ChartNet.
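To make the quintuple alignment concrete, here is a minimal sketch of what one ChartNet-style sample might look like under a code-guided synthesis pipeline. The schema, field names, and synthesis logic are illustrative assumptions, not the paper's actual implementation; the key idea shown is that the plotting code, data table, summary, and QA pair are all derived from the same underlying data, so they stay aligned by construction.

```python
from dataclasses import dataclass

# Hypothetical schema for one sample: the five aligned components
# the abstract describes (field names are assumptions).
@dataclass
class ChartSample:
    plotting_code: str   # code that would render the chart
    image_path: str      # path the rendered image would be saved to
    data_table: list     # underlying (category, value) rows
    summary: str         # natural language description
    qa_pairs: list       # (question, answer) tuples

def synthesize_bar_chart_sample(categories, values):
    # Code-guided synthesis: emit the plotting code as text, with the
    # data embedded directly, so code and table are exactly aligned.
    code = (
        "import matplotlib.pyplot as plt\n"
        f"plt.bar({categories!r}, {values!r})\n"
        "plt.savefig('chart.png')\n"
    )
    table = list(zip(categories, values))
    # Summary and QA are computed from the same data the code plots,
    # guaranteeing cross-modal consistency.
    top = max(table, key=lambda row: row[1])
    summary = (
        f"Bar chart of {len(table)} categories; "
        f"'{top[0]}' is highest at {top[1]}."
    )
    qa = [("Which category has the largest value?", top[0])]
    return ChartSample(code, "chart.png", table, summary, qa)

sample = synthesize_bar_chart_sample(["A", "B", "C"], [3, 7, 5])
print(sample.summary)
```

A real pipeline would additionally execute the emitted code to render the image and apply quality filters to the result, but the alignment guarantee illustrated here is the core of the code-guided approach.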
Problem

Research questions and friction points this paper is trying to address.

chart understanding
vision-language models
multimodal dataset
data visualization
cross-modal alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

code-guided synthesis
multimodal alignment
chart understanding
quality filtering
foundation models
👥 Authors
Jovana Kondic (MIT)
Pengyuan Li (IBM Research)
Dhiraj Joshi (IBM T. J. Watson Research): Artificial Intelligence, Machine Learning, Data Mining, Computer Vision, Multimedia
Isaac Sanchez (Professor of Chemical Engineering, U of Texas): statistical thermodynamics
Ben Wiesel (IBM Research)
Shafiq Abedin (IBM Research)
Amit Alfassy (IBM Research)
Eli Schwartz (IBM Research)
Daniel Caraballo (IBM Research)
Yagmur Gizem Cinar (IBM Research)
Florian Scheidegger (IBM, ETH): machine learning, deep learning, software engineering, low precision
Steven I. Ross (IBM Research)
Daniel Karl I. Weidele (IBM Research & University of Konstanz): Visualization, Machine Learning, Artificial Intelligence, Network Science, Recommender Systems
Hang Hua (University of Rochester): Computer Vision, Natural Language Processing, Machine Learning
Ekaterina Arutyunova (MIT)
Roei Herzig (MIT-IBM Lab | BAIR, UC Berkeley): Computer Vision, Machine Learning, Robotics, Artificial Intelligence
Zexue He (University of California, San Diego): Trustworthy NLP, LLM
Zihan Wang (Abaka AI)
Xinyue Yu (Abaka AI)
Yunfei Zhao (Peking University): intelligent program, code generation, code representation
Sicong Jiang (McGill University, 2077AI): Large Language Models, Vision Language Models, Autonomous Driving, Trustworthy AI
Minghao Liu (Abaka AI)
Qunshu Lin (Co-Founder of Abaka.AI): Data-Centric AI
Peter Staar (IBM Research)
Luis Lastras (IBM TJ Watson): Information theory, coding theory, memory systems, non-volatile memory, signal processing