🤖 AI Summary
This work addresses the limited fine-grained understanding of scientific charts in multimodal large language models (MLLMs). To tackle this, we propose a dual-path learning paradigm that jointly models spatial structure and textual semantics: (1) visual layout modeling via chart-element localization, and (2) semantic-geometric co-modeling through a novel "chart → executable code" generation task. Building on this paradigm, we introduce CS-Bench, the first benchmark dedicated to chart spatial understanding, and design an LLM-driven pipeline for code generation and synthetic data augmentation. Extensive experiments demonstrate that our approach significantly outperforms existing state-of-the-art methods across multiple chart understanding benchmarks while generalizing well across model scales. The proposed paradigm advances scientific document parsing by enabling more precise, executable, and spatially grounded multimodal reasoning.
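To make the "chart → executable code" task concrete, below is a minimal sketch of the kind of code such a generation task would target. This assumes matplotlib as the plotting library (a common choice in chart-to-code work, though the summary does not name one), and the labels and values are purely illustrative:

```python
# Hypothetical target output for a "chart -> executable code" task:
# a short script that, when executed, reproduces a simple bar chart.
# The data values and labels here are illustrative assumptions, not
# taken from the paper's dataset.
import matplotlib.pyplot as plt

categories = ["2019", "2020", "2021", "2022"]
revenue = [4.2, 3.8, 5.1, 6.4]  # assumed values "read off" the chart image

fig, ax = plt.subplots(figsize=(5, 3))
ax.bar(categories, revenue, color="#4C72B0")
ax.set_title("Annual Revenue")
ax.set_xlabel("Year")
ax.set_ylabel("Revenue (B USD)")
fig.savefig("chart.png", dpi=150)
```

Because the output is executable, both the underlying data (the `revenue` list) and the rendered layout are recoverable from a single generated artifact, which is what lets one training signal serve both the textual and spatial paths.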
📝 Abstract
Chart understanding is crucial for deploying multimodal large language models (MLLMs) in real-world scenarios such as analyzing scientific papers and technical reports. Unlike natural images, charts pair a structured visual layout (a spatial property) with an underlying data representation (a textual property); grasping both is essential for precise, fine-grained chart reasoning. Motivated by this observation, we propose START, Spatial and Textual learning for chART understanding. Specifically, we introduce (i) chart-element grounding and (ii) chart-to-code generation to strengthen an MLLM's understanding of both a chart's visual layout and its data details. To facilitate spatial and textual learning, we build the START-Dataset with a novel data-generation pipeline. The pipeline first leverages an MLLM to translate real chart images into executable chart code, recovering the underlying data representation while preserving the visual distribution of real-world charts. It then evolves the code with a Large Language Model (LLM) to ascertain the positions of the chart elements that capture the chart's visual structure, addressing challenges that existing data-generation methods cannot handle. To evaluate a model's ability to understand chart spatial structure, we propose the Chart Spatial understanding Benchmark (CS-Bench), filling a critical gap in comprehensive chart understanding evaluation. Leveraging spatial and textual learning, START delivers consistent gains over its base models across model sizes and benchmarks, and surpasses the prior state of the art by a clear margin. Code, data, and models will be publicly available.
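One plausible mechanism for the "evolve the code to ascertain element positions" step is to execute the recovered chart code and query the plotting library's own artists for their pixel-space bounding boxes. The abstract does not specify how this is done; the sketch below assumes matplotlib and is illustrative only:

```python
# Sketch: derive chart-element positions by executing generated chart
# code and reading each matplotlib artist's bounding box. This is one
# plausible realization of the paper's LLM-evolved annotation step,
# not the authors' confirmed implementation.
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(5, 3))
bars = ax.bar(["A", "B", "C"], [3, 7, 5])
title = ax.set_title("Example")

fig.canvas.draw()  # bounding boxes are only valid after a draw
renderer = fig.canvas.get_renderer()
img_height = fig.bbox.height  # figure height in pixels

def image_bbox(artist):
    """Return (x0, y0, x1, y1) in image coordinates (origin top-left)."""
    b = artist.get_window_extent(renderer)  # display coords, origin bottom-left
    return (b.x0, img_height - b.y1, b.x1, img_height - b.y0)

# Collect grounding annotations for the title and each bar.
annotations = {"title": image_bbox(title)}
for i, bar in enumerate(bars):
    annotations[f"bar_{i}"] = image_bbox(bar)
print(annotations)
```

Annotations produced this way are exact by construction, since they come from the renderer rather than from a separate detector, which would explain how the pipeline sidesteps the labeling challenges faced by existing methods.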