DataSciBench: An LLM Agent Benchmark for Data Science

📅 2025-02-19
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing LLM evaluation benchmarks for data science are constrained by single-task designs, reliance on easily obtainable ground-truth data, and deterministic metrics, failing to reflect real-world complexity and answer uncertainty. This work introduces the first comprehensive benchmark specifically designed for scenarios with inherently uncertain answers, overcoming these limitations. We propose a novel Task-Function-Code (TFC) three-dimensional evaluation framework; develop a semi-automated ground-truth (GT) generation and validation pipeline integrating LLM self-consistency reasoning with human verification; and build a unified evaluation platform supporting API-based, open-source, and code-specialized models. We conduct systematic evaluation across six API models, eight general-purpose open-source models, and nine code-specific models. Results demonstrate that API models consistently outperform others, with DeepSeek-Coder-33B-Instruct emerging as the top-performing open-source model.

๐Ÿ“ Abstract
This paper presents DataSciBench, a comprehensive benchmark for evaluating Large Language Model (LLM) capabilities in data science. Recent related benchmarks have primarily focused on single tasks, easily obtainable ground truth, and straightforward evaluation metrics, which limits the scope of tasks that can be evaluated. In contrast, DataSciBench is constructed based on a more comprehensive and curated collection of natural and challenging prompts for uncertain ground truth and evaluation metrics. We develop a semi-automated pipeline for generating ground truth (GT) and validating evaluation metrics. This pipeline utilizes and implements an LLM-based self-consistency and human verification strategy to produce accurate GT by leveraging collected prompts, predefined task types, and aggregate functions (metrics). Furthermore, we propose an innovative Task - Function - Code (TFC) framework to assess each code execution outcome based on precisely defined metrics and programmatic rules. Our experimental framework involves testing 6 API-based models, 8 open-source general models, and 9 open-source code generation models using the diverse set of prompts we have gathered. This approach aims to provide a more comprehensive and rigorous evaluation of LLMs in data science, revealing their strengths and weaknesses. Experimental results demonstrate that API-based models outperform open-sourced models on all metrics and Deepseek-Coder-33B-Instruct achieves the highest score among open-sourced models. We release all code and data at https://github.com/THUDM/DataSciBench.
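The abstract describes GT generation via LLM self-consistency plus human verification but gives no implementation detail. A minimal sketch of the self-consistency step, assuming string-comparable candidate answers and a hypothetical agreement threshold (both are illustrative choices, not from the paper):

```python
from collections import Counter

def self_consistency_gt(candidates, agreement_threshold=0.6):
    """Pick the majority answer among multiple LLM runs as candidate
    ground truth; flag low-agreement cases for human verification."""
    counts = Counter(candidates)
    answer, votes = counts.most_common(1)[0]
    agreement = votes / len(candidates)
    needs_human_review = agreement < agreement_threshold
    return answer, agreement, needs_human_review

# Example: five sampled runs of the same prompt
answer, agreement, review = self_consistency_gt(["42", "42", "42", "41", "42"])
```

Here the majority answer becomes the candidate GT, and only low-agreement prompts are routed to humans, which is what makes the pipeline semi-automated.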
Problem

Research questions and friction points this paper is trying to address.

Evaluates LLM capabilities in data science
Addresses uncertain ground truth and metrics
Assesses diverse code execution outcomes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semi-automated pipeline for GT generation
LLM-based self-consistency verification strategy
Task-Function-Code framework for evaluation
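The Task-Function-Code framework pairs each task type with an aggregate function (metric) applied to the code's execution outcome. A minimal sketch of that scoring loop, with hypothetical task types and metrics (the real benchmark's tasks and functions are defined in its released code):

```python
def evaluate_tfc(task_outputs, tfc_table):
    """Score each task's code-execution output with the aggregate
    function (metric) registered for that task type, then average."""
    scores = {}
    for task, output in task_outputs.items():
        metric = tfc_table[task]  # aggregate function for this task type
        scores[task] = metric(output)
    overall = sum(scores.values()) / len(scores)
    return scores, overall

# Hypothetical metrics for two illustrative task types
tfc_table = {
    "data_cleaning": lambda out: 1.0 if out.get("nulls_removed") else 0.0,
    "visualization": lambda out: 1.0 if out.get("figure_saved") else 0.0,
}
scores, overall = evaluate_tfc(
    {"data_cleaning": {"nulls_removed": True},
     "visualization": {"figure_saved": False}},
    tfc_table,
)
```

Keeping metrics in a per-task-type table is what lets the benchmark mix deterministic checks and programmatic rules across heterogeneous tasks.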
👥 Authors
Dan Zhang
Tsinghua University; Zhipu AI
Sining Zhoubian
Tsinghua University; Zhipu AI
Min Cai
Zhipu AI
Fengzu Li
Tsinghua University
Lekang Yang
Tsinghua University
Wei Wang
Tsinghua University
Tianjiao Dong
University of California, Berkeley
Ziniu Hu
xAI
Jie Tang
UW Madison
Yisong Yue
California Institute of Technology; Asari AI; Latitude AI