DABstep: Data Agent Benchmark for Multi-step Reasoning

📅 2025-06-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of evaluating AI agents on realistic, multi-step data analysis tasks. To this end, we introduce DABstep (Data Agent Benchmark for Multi-step Reasoning), a benchmark comprising over 450 complex, real-world questions derived from a financial analytics platform. DABstep requires models to jointly perform code-based data processing and contextual reasoning over heterogeneous documentation, following an iterative problem-solving protocol with a factoid-style, automatically checkable answer format. Experimental results reveal a substantial performance gap: even the best state-of-the-art agent achieves only 14.55% accuracy on the hardest tasks, highlighting bottlenecks in long-horizon reasoning, precise execution, and cross-referencing of multiple sources. The benchmark, evaluation toolkit, and a live leaderboard are publicly released, establishing standardized infrastructure for rigorous, reproducible assessment of AI agents' data-analysis capabilities.

📝 Abstract
We introduce DABstep, a novel benchmark for evaluating AI agents on realistic multi-step data analysis tasks. DABstep comprises over 450 real-world challenges derived from a financial analytics platform, requiring models to combine code-based data processing with contextual reasoning over heterogeneous documentation. Each task demands an iterative, multi-step problem-solving approach, testing capabilities in data manipulation, cross-referencing multiple sources, and precise result reporting. The benchmark provides a factoid-style answer format with automatic correctness checks for objective scoring at scale. We evaluate leading LLM-based agents, revealing a substantial performance gap: even the best agent achieves only 14.55% accuracy on the hardest tasks. We detail our benchmark's design, dataset composition, task formulation, evaluation protocol, report baseline results and analyze failure modes. DABstep is released with a public leaderboard and toolkit to accelerate research in autonomous data analysis.
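The iterative, multi-step approach the abstract describes is essentially an observe-act loop: the agent reads the task documentation, proposes analysis code, observes the execution result, and repeats until it commits to a final factoid answer. Below is a minimal sketch of such a loop under assumed interfaces; `llm` and `run_python` are caller-supplied placeholders, not part of DABstep's released toolkit.

```python
from typing import Callable

def solve_task(
    question: str,
    docs: str,
    llm: Callable[[str], str],         # model call; hypothetical interface
    run_python: Callable[[str], str],  # sandboxed code executor; hypothetical
    max_steps: int = 10,
) -> str:
    """Alternate between proposing analysis code and observing its output
    until the model commits to a final factoid answer."""
    history = f"Documentation:\n{docs}\n\nQuestion: {question}\n"
    for _ in range(max_steps):
        step = llm(history)
        if step.startswith("FINAL:"):       # agent commits to an answer
            return step[len("FINAL:"):].strip()
        observation = run_python(step)      # execute the proposed code
        history += f"\nCode:\n{step}\nOutput:\n{observation}\n"
    return ""                               # step budget exhausted
```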
Problem

Research questions and friction points this paper is trying to address.

Evaluating AI agents on realistic, multi-step data analysis tasks
Combining code-based data processing with contextual reasoning over heterogeneous documentation
Testing data manipulation, cross-referencing of multiple sources, and precise result reporting
Innovation

Methods, ideas, or system contributions that make the work stand out.

DABstep, a benchmark of 450+ real-world, multi-step data analysis tasks from a financial analytics platform
Tasks combining code-based data processing with contextual reasoning (see the agent-loop sketch above)
Factoid-style answers with automatic correctness checks for objective scoring at scale (sketched below)
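Because answers are factoid-style, scoring reduces to a string or number comparison. The snippet below is an illustrative sketch of what such an automatic correctness check might look like; the normalization rules and numeric tolerance are assumptions for illustration, not the benchmark's actual grader.

```python
import re

def normalize(answer: str) -> str:
    """Trim, lowercase, and collapse whitespace so superficial
    formatting differences do not fail an otherwise correct answer."""
    answer = answer.strip().lower().strip(" .'\"")
    return re.sub(r"\s+", " ", answer)

def is_correct(predicted: str, gold: str, rel_tol: float = 1e-3) -> bool:
    """Exact match after normalization, with a numeric fallback that
    tolerates small rounding differences (e.g. 14.55 vs 14.550)."""
    if normalize(predicted) == normalize(gold):
        return True
    try:  # compare numerically if both answers parse as floats
        p, g = float(predicted), float(gold)
        return abs(p - g) <= rel_tol * max(1.0, abs(g))
    except ValueError:
        return False
```

For example, `is_correct("14.550", "14.55")` returns True via the numeric fallback, while unparseable answers fall back to the normalized string check.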