🤖 AI Summary
Current multimodal large language models (MLLMs) over-rely on OCR-extracted text for chart understanding, leading to numerical hallucinations when annotations are sparse, and exhibit weak visual grounding, particularly in precise spatial localization. To address this, we propose PointCoT, a reasoning framework that aligns textual reasoning chains with visual regions by generating element-level bounding boxes and re-rendering charts with those annotations, thereby strengthening grounding in chart structure and proportions. We also introduce ChartPoint-SFT-62k, a high-quality dataset built by an automated synthesis pipeline that pairs step-by-step chain-of-thought (CoT) reasoning with bounding boxes and re-rendered visualizations for instruction tuning. Models fine-tuned on this data, ChartPointQ2 and ChartPointQ2.5, outperform state-of-the-art methods across several chart benchmarks, e.g., +5.04% on ChartBench, demonstrating superior reasoning accuracy and interpretability.
📝 Abstract
Multimodal Large Language Models (MLLMs) have emerged as powerful tools for chart comprehension. However, they rely heavily on content extracted via OCR, which leads to numerical hallucinations when chart textual annotations are sparse. While existing methods focus on scaling up instruction data, they fail to address the fundamental challenge, i.e., reasoning grounded in visual perception. In this paper, we make a critical observation: MLLMs exhibit weak grounding in chart elements and proportional relationships, as evidenced by their inability to localize the key positions that match their reasoning. To bridge this gap, we propose PointCoT, which integrates reflective interaction into chain-of-thought reasoning over charts. By prompting MLLMs to generate bounding boxes and re-render charts based on these location annotations, we establish connections between textual reasoning steps and visually grounded regions. We further introduce an automated pipeline to construct ChartPoint-SFT-62k, a dataset featuring 19.2K high-quality chart samples with step-by-step CoT, bounding boxes, and re-rendered visualizations. Leveraging this data, we develop two instruction-tuned models, ChartPointQ2 and ChartPointQ2.5, which outperform the state of the art across several chart benchmarks, e.g., +5.04% on ChartBench.
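The reflective interaction loop the abstract describes — a reasoning step produces a bounding box, the chart is re-rendered with that annotation, and the updated chart feeds the next step — can be sketched roughly as below. This is an illustrative stand-in, not the paper's implementation: the `Step` structure, `annotate` re-rendering, and the model interface (here a toy scripted function) are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Step:
    thought: str                 # one textual CoT step
    bbox: tuple                  # grounded region: (x0, y0, x1, y1) in pixels

def annotate(chart, bbox):
    """Stand-in for re-rendering: record the box on a copy of the chart state.
    A real pipeline would redraw the chart image with the box overlaid."""
    return {**chart, "annotations": chart.get("annotations", []) + [bbox]}

def pointcot(chart, question, model, max_steps=4):
    """Interleave textual reasoning with visual grounding: each step must
    ground its claim in a bounding box, and the chart is re-rendered with
    that box before the model produces the next step."""
    steps = []
    for _ in range(max_steps):
        step = model(chart, question, steps)
        if step is None:                     # model signals reasoning is done
            break
        steps.append(step)
        chart = annotate(chart, step.bbox)   # reflective re-rendering
    return steps, chart

# Toy scripted "model": grounds two bars, then stops.
def toy_model(chart, question, steps):
    script = [
        Step("locate the bar for 2021", (40, 10, 60, 120)),
        Step("locate the bar for 2022", (80, 10, 100, 150)),
    ]
    return script[len(steps)] if len(steps) < len(script) else None

steps, final_chart = pointcot({"title": "sales"}, "Which year is higher?", toy_model)
```

The point of the loop is that each textual claim is forced to commit to a concrete region before reasoning continues, which is what distinguishes this from plain text-only chain-of-thought.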