🤖 AI Summary
Vision-language models (VLMs) often produce inaccurate descriptions and exhibit weak reasoning in chart understanding tasks. To address this, we propose a fully automated, noise-free, code-driven synthesis framework that generates high-quality chart-question-answer triplets by executing interpretable chart-generation code. We further design a candidate-conditioned answering mechanism that fuses multiple responses and dynamically integrates contextual information during inference, enabling self-improvement of VLMs. Crucially, our approach requires no human annotation or external models, establishing a fully self-improving paradigm. Experiments demonstrate that our method achieves up to a 15.50-percentage-point accuracy gain on mainstream chart understanding benchmarks, significantly improving fine-grained chart perception and multi-step reasoning, particularly on complex charts.
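The noise-free synthesis idea can be sketched roughly as follows: because the framework itself writes the chart-generation code from data it controls, the ground-truth answer is computed from that same data rather than labeled by a human or another model. The function name, the code template, and the question form below are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Illustrative template for interpretable chart-generation code; executing it
# (in an environment with matplotlib) would render the chart image.
CHART_CODE_TEMPLATE = """\
import matplotlib.pyplot as plt
labels, values = {labels!r}, {values!r}
plt.bar(labels, values)
plt.savefig({out!r})
"""

def synthesize_triplet(seed=0, out_path="chart.png"):
    """Hypothetical sketch: generate an aligned chart-question-answer triplet."""
    rng = random.Random(seed)
    labels = ["Q1", "Q2", "Q3", "Q4"]
    values = [rng.randint(10, 99) for _ in labels]
    # The chart is produced by generated, executable code...
    chart_code = CHART_CODE_TEMPLATE.format(
        labels=labels, values=values, out=out_path)
    question = "Which quarter has the highest value?"
    # ...and the answer is derived from the same underlying data,
    # so the triplet is aligned by construction (no noisy labels).
    answer = labels[values.index(max(values))]
    return chart_code, question, answer
```

Since chart, question, and answer all originate from one data source, label noise is eliminated by construction rather than filtered after the fact.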
📝 Abstract
Vision Language Models (VLMs) often struggle with chart understanding tasks, particularly accurate chart description and complex reasoning. Synthetic data generation is a promising solution, but it typically suffers from noisy labels. To address this challenge, we first introduce a chart synthesis pipeline that generates aligned chart-question-answer triplets through code generation and execution, ensuring the reliability of synthetic data without human intervention. Furthermore, inspired by test-time scaling, which improves performance by increasing the inference budget, we design a candidate-conditioned answering process: the VLM first generates multiple responses per query, then synthesizes the final answer by contextualizing these candidates. Experiments demonstrate significant improvements, with an accuracy gain of up to 15.50 points over the initial VLM, in a fully self-improving paradigm that requires neither human-labeled data nor external models.
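The candidate-conditioned answering process described above can be sketched as a two-pass loop: sample several responses at nonzero temperature, then ask the same VLM for a final answer with those candidates in context. The `vlm.generate(image, prompt, temperature)` interface and the fusion prompt are assumptions for illustration, not the paper's actual API.

```python
def candidate_conditioned_answer(vlm, image, question, n_candidates=4):
    """Hypothetical sketch of candidate-conditioned answering.

    `vlm` is assumed to expose generate(image, prompt, temperature) -> str.
    """
    # Pass 1: sample multiple independent candidate responses per query.
    candidates = [vlm.generate(image, question, temperature=0.7)
                  for _ in range(n_candidates)]
    # Build a prompt that places the candidates in context.
    listing = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    fused_prompt = (f"{question}\nCandidate answers:\n{listing}\n"
                    "Considering the candidates above, give the final answer.")
    # Pass 2: one deterministic pass that synthesizes the final answer
    # conditioned on the candidates.
    return vlm.generate(image, fused_prompt, temperature=0.0)
```

Note that the same model plays both roles (sampler and synthesizer), which is what makes the paradigm self-improving: no external judge model is needed.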