DSAEval: Evaluating Data Science Agents on a Wide Range of Real-World Data Science Problems

📅 2026-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of effective evaluation for data science agents on open-ended, multimodal tasks without definitive answers. We propose DSAEval, the first comprehensive benchmark that integrates multimodal environmental perception, multi-turn interaction, and a three-dimensional assessment of reasoning, code, and results. The framework encompasses 641 real-world problems and 285 diverse datasets. Using this benchmark, we systematically evaluate 11 state-of-the-art agents and find that Claude-Sonnet-4.5 achieves the best overall performance, GPT-5.2 demonstrates the highest efficiency, and MiMo-V2-Flash offers the best cost-effectiveness. Multimodal perception improves performance by 2.04%–11.30% on visual tasks, yet unstructured tasks remain challenging.

📝 Abstract
Recent LLM-based data agents aim to automate data science tasks ranging from data analysis to deep learning. However, the open-ended nature of real-world data science problems, which often span multiple taxonomies and lack standard answers, poses a significant challenge for evaluation. To address this, we introduce DSAEval, a benchmark comprising 641 real-world data science problems grounded in 285 diverse datasets, covering both structured and unstructured data (e.g., vision and text). DSAEval incorporates three distinctive features: (1) Multimodal Environment Perception, which enables agents to interpret observations from multiple modalities including text and vision; (2) Multi-Query Interactions, which mirror the iterative and cumulative nature of real-world data science projects; and (3) Multi-Dimensional Evaluation, which provides a holistic assessment across reasoning, code, and results. We systematically evaluate 11 advanced agentic LLMs using DSAEval. Our results show that Claude-Sonnet-4.5 achieves the strongest overall performance, GPT-5.2 is the most efficient, and MiMo-V2-Flash is the most cost-effective. We further demonstrate that multimodal perception consistently improves performance on vision-related tasks, with gains ranging from 2.04% to 11.30%. Overall, while current data science agents perform well on structured data and routine data analysis workflows, substantial challenges remain in unstructured domains. Finally, we offer critical insights and outline future research directions to advance the development of data science agents.
Problem

Research questions and friction points this paper is trying to address.

data science agents
evaluation benchmark
real-world problems
multimodal data
open-ended tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Environment Perception
Multi-Query Interactions
Multi-Dimensional Evaluation
Data Science Agents
LLM-based Evaluation Benchmark
Maojun Sun
Department of Data Science and Artificial Intelligence, Hong Kong Polytechnic University

Yifei Xie
Department of Data Science and Artificial Intelligence, Hong Kong Polytechnic University

Yue Wu
Lecturer, East China University of Science and Technology
D2D Communications, Numerical Optimisation, Mobile Edge Computing, Data Mining

Ruijian Han
Department of Data Science and Artificial Intelligence, Hong Kong Polytechnic University

Binyan Jiang
The Hong Kong Polytechnic University
Statistics

Defeng Sun
Department of Applied Mathematics, Hong Kong Polytechnic University

Yancheng Yuan
Assistant Professor, The Hong Kong Polytechnic University
Optimization Algorithms, Machine Learning

Jian Huang
Department of Data Science and Artificial Intelligence, Hong Kong Polytechnic University; Department of Applied Mathematics, Hong Kong Polytechnic University