Chart-CoCa: Self-Improving Chart Understanding of Vision LMs via Code-Driven Synthesis and Candidate-Conditioned Answering

📅 2025-08-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) often produce inaccurate descriptions and exhibit weak reasoning in chart understanding tasks. To address this, we propose a fully automated, code-driven synthesis framework that generates high-quality, noise-free chart-question-answer triplets by executing interpretable chart-generation code. We further design a candidate-conditioned answering mechanism that samples multiple responses per query and fuses them in context at inference time, enabling the VLM to refine its own answers. Crucially, the approach requires no human annotation or external model intervention, establishing a fully self-contained self-improvement paradigm. Experiments show accuracy gains of up to 15.50 percentage points on mainstream chart understanding benchmarks, with marked improvements in fine-grained chart perception and multi-step reasoning, particularly on complex charts.

📝 Abstract
Vision Language Models (VLMs) often struggle with chart understanding tasks, particularly accurate chart description and complex reasoning. Synthetic data generation is a promising remedy, but it typically suffers from noisy labels. To address this challenge, we first introduce a chart synthesis pipeline that generates aligned chart-question-answer triplets through code generation and execution, ensuring the reliability of the synthetic data without human intervention. Furthermore, inspired by test-time scaling, which improves performance by increasing the inference budget, we design a candidate-conditioned answering process: the VLM first generates multiple responses per query and then synthesizes the final answer by contextualizing these candidates. Experiments demonstrate significant improvements, with up to a 15.50-point accuracy gain over the initial VLM, in a fully self-improving paradigm that requires neither human-labeled data nor external models.
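The key idea behind the noise-free synthesis pipeline is that the chart image and its answer are derived from the same underlying data: code generates the data, code renders the chart, and the answer is computed programmatically rather than labeled. A minimal sketch of this idea (the function names and the bar-chart template are illustrative assumptions, not the paper's actual implementation):

```python
import random

def synthesize_triplet(seed):
    """Hypothetical sketch of code-driven chart-QA synthesis: build a
    bar-chart spec from random data, emit the plotting code that would
    render it, and derive a QA pair from the same data, so the answer
    is correct by construction (no label noise)."""
    rng = random.Random(seed)
    categories = ["A", "B", "C", "D"]
    values = [rng.randint(1, 100) for _ in categories]

    # Executing this interpretable chart-generation code yields the image.
    chart_code = (
        "import matplotlib.pyplot as plt\n"
        f"plt.bar({categories!r}, {values!r})\n"
        "plt.savefig('chart.png')\n"
    )

    # Because we hold the source data, the answer is exact, not annotated.
    question = "Which category has the highest value?"
    answer = categories[values.index(max(values))]
    return chart_code, question, answer
```

In a full pipeline the question templates would cover many chart types and reasoning depths, but the alignment guarantee is the same: answers are computed from the data, never guessed from the rendered image.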
Problem

Research questions and friction points this paper is trying to address.

VLMs struggle with chart understanding tasks
Synthetic data generation faces noisy label challenges
Improving VLM accuracy without human-labeled data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Code-driven chart-question-answer triplet synthesis
Candidate-conditioned answering for contextualization
Self-improving paradigm without human intervention
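The candidate-conditioned answering step described above can be sketched as a two-stage inference loop, assuming a `vlm(image, prompt)` callable as a hypothetical model interface (the prompt wording is illustrative, not the paper's):

```python
def candidate_conditioned_answer(vlm, image, question, n_samples=4):
    """Sketch of candidate-conditioned answering: sample several
    candidate responses, then query the model once more with the
    candidates placed in context so it can synthesize a final answer."""
    # Stage 1: spend extra inference budget on multiple candidates.
    candidates = [vlm(image, question) for _ in range(n_samples)]

    # Stage 2: condition the final answer on all candidates at once.
    listing = "\n".join(
        f"Candidate {i + 1}: {c}" for i, c in enumerate(candidates)
    )
    fusion_prompt = (
        f"Question: {question}\n"
        f"Candidate answers:\n{listing}\n"
        "Considering these candidates, give the single best final answer."
    )
    return vlm(image, fusion_prompt)
```

Unlike plain majority voting, the fusion call lets the model weigh and reconcile conflicting candidates rather than merely count them.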