DSBC: Data Science task Benchmarking with Context engineering

📅 2025-07-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) lack systematic, real-world evaluation for automated data science tasks. Method: We introduce the first comprehensive benchmark grounded in authentic commercial interaction data, covering eight canonical tasks—including data cleaning, feature engineering, and model diagnostics—and systematically evaluating three context-engineering strategies: zero-shot prompting, multi-step reasoning, and SmolAgent-based agents. We further quantify prompt sensitivity and the impact of temperature parameterization. Contribution/Results: Experiments across Claude, Gemini, and OpenAI models reveal substantial performance disparities. Crucially, we provide the first empirical evidence that context design quality, prompt robustness, and temperature configuration are decisive factors for practical deployment. Our benchmark establishes a reproducible evaluation framework and actionable optimization pathways for LLM-driven data science.

📝 Abstract
Recent advances in large language models (LLMs) have significantly impacted data science workflows, giving rise to specialized data science agents designed to automate analytical tasks. Despite rapid adoption, systematic benchmarks evaluating the efficacy and limitations of these agents remain scarce. In this paper, we introduce a comprehensive benchmark crafted to reflect real-world user interactions with data science agents, derived from observed usage of our commercial applications. We evaluate three LLMs — Claude-4.0-Sonnet, Gemini-2.5-Flash, and OpenAI-o4-Mini — across three approaches: zero-shot with context engineering, multi-step with context engineering, and SmolAgent-based agents. Our benchmark assesses performance across a diverse set of eight data science task categories, and additionally explores model sensitivity to common prompting issues such as data leakage and slightly ambiguous instructions. We further investigate the influence of temperature parameters on overall and task-specific outcomes for each model and approach. Our findings reveal distinct performance disparities among the evaluated models and methodologies, highlighting critical factors that affect practical deployment. The benchmark dataset and evaluation framework introduced herein aim to provide a foundation for future research on more robust and effective data science agents.
Problem

Research questions and friction points this paper is trying to address.

Evaluate LLM-based data science agents' efficacy and limitations
Assess model sensitivity to prompting issues like data leakage
Investigate temperature parameters' impact on task performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmark grounded in real commercial usage covering eight data science task categories
Tests zero-shot, multi-step context engineering, and SmolAgent-based approaches
Assesses model sensitivity to prompting issues such as data leakage and ambiguous instructions
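The evaluation grid the abstract describes — models × approaches × temperatures over a set of task categories — can be sketched roughly as below. Everything here is an illustrative assumption rather than the paper's actual harness: `call_model` is a deterministic stub standing in for real Claude/Gemini/OpenAI API calls, and the prompts, task, and exact-match scoring are hypothetical.

```python
import random

def call_model(model: str, prompt: str, temperature: float) -> str:
    # Stub: a real harness would call the provider's API here.
    # Seeded so the sketch is deterministic and runnable offline.
    random.seed(f"{model}|{prompt}|{temperature}")
    return random.choice(["df.dropna()", "df.fillna(0)"])

def zero_shot(model: str, task: str, temperature: float) -> str:
    # Single prompt with the engineered context supplied up front.
    prompt = f"Context: a pandas DataFrame `df`.\nTask: {task}\nAnswer with code only."
    return call_model(model, prompt, temperature)

def multi_step(model: str, task: str, temperature: float) -> str:
    # Decompose: ask for a plan first, then generate code from the plan.
    plan = call_model(model, f"Outline the steps to: {task}", temperature)
    return call_model(model, f"Plan: {plan}\nNow write code for: {task}", temperature)

def run_benchmark(models, tasks, temperatures):
    # One result record per (model, task, temperature, approach) cell.
    results = []
    for model in models:
        for task, expected in tasks:
            for temp in temperatures:
                for name, approach in [("zero-shot", zero_shot),
                                       ("multi-step", multi_step)]:
                    answer = approach(model, task, temp)
                    results.append({
                        "model": model, "task": task, "approach": name,
                        "temperature": temp, "correct": answer == expected,
                    })
    return results

tasks = [("drop rows with missing values", "df.dropna()")]
results = run_benchmark(["model-a", "model-b"], tasks, [0.0, 0.7])
print(len(results))  # 2 models x 1 task x 2 temperatures x 2 approaches = 8
```

Sweeping the grid this way is what lets the per-factor effects (model, approach, temperature) be compared on identical task inputs.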
Ram Mohan Rao Kadiyala
Traversaal.ai
Siddhant Gupta
Cohere Labs Community
Jebish Purbey
Cohere Labs Community
Giulio Martini
Traversaal.ai
Suman Debnath
Amazon
Hamza Farooq
Researcher, University of Minnesota, USA.