DashboardQA: Benchmarking Multimodal Agents for Question Answering on Interactive Dashboards

📅 2025-08-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing visual question answering (VQA) benchmarks predominantly focus on static charts and therefore fail to evaluate multimodal GUI agents' reasoning capabilities on realistic, interactive dashboards. To address this gap, we introduce DashboardQA, the first multimodal QA benchmark explicitly designed for interactive dashboards. It is constructed from 112 real-world Tableau Public dashboards and comprises 405 question-answer pairs spanning five categories: multiple-choice, factoid, hypothetical, multi-dashboard, and conversational. Crucially, DashboardQA incorporates dynamic interactivity as a core evaluation dimension. We systematically assess leading GUI agents, including ones based on Gemini-Pro-2.5 (38.69% accuracy) and OpenAI CUA (22.69% accuracy), revealing critical limitations in grounding dashboard elements, planning interaction trajectories, and performing reasoning. By grounding evaluation in authentic analytical workflows, DashboardQA bridges the gap between static-chart benchmarks and real-world dashboard usage, enabling more rigorous and ecologically valid assessment of multimodal agent capabilities.

📝 Abstract
Dashboards are powerful visualization tools for data-driven decision-making, integrating multiple interactive views that allow users to explore, filter, and navigate data. Unlike static charts, dashboards support rich interactivity, which is essential for uncovering insights in real-world analytical workflows. However, existing question-answering benchmarks for data visualizations largely overlook this interactivity, focusing instead on static charts. This limitation severely constrains their ability to evaluate the capabilities of modern multimodal agents designed for GUI-based reasoning. To address this gap, we introduce DashboardQA, the first benchmark explicitly designed to assess how vision-language GUI agents comprehend and interact with real-world dashboards. The benchmark includes 112 interactive dashboards from Tableau Public and 405 question-answer pairs spanning five categories: multiple-choice, factoid, hypothetical, multi-dashboard, and conversational. By assessing a variety of leading closed- and open-source GUI agents, our analysis reveals their key limitations, particularly in grounding dashboard elements, planning interaction trajectories, and performing reasoning. Our findings indicate that interactive dashboard reasoning is a challenging task for all the VLMs evaluated. Even the top-performing agents struggle; for instance, the best agent, based on Gemini-Pro-2.5, achieves only 38.69% accuracy, while the OpenAI CUA agent reaches just 22.69%, demonstrating the benchmark's significant difficulty. We release DashboardQA at https://github.com/vis-nlp/DashboardQA.
Problem

Research questions and friction points this paper is trying to address.

Benchmarking multimodal agents for interactive dashboard question answering
Assessing vision-language GUI agents' comprehension of real-world dashboards
Evaluating agents' capabilities in grounding, interaction planning, and reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces DashboardQA benchmark for interactive dashboards
Evaluates multimodal GUI agents on interactive visualization tasks
Assesses dashboard element grounding and interaction planning capabilities