🤖 AI Summary
Large language models (LLMs) lack systematic, real-world evaluation for automated data science tasks. Method: We introduce the first comprehensive benchmark grounded in authentic commercial interaction data, covering eight canonical tasks—including data cleaning, feature engineering, and model diagnostics—and systematically evaluating three context-engineering strategies: zero-shot prompting, multi-step reasoning, and a SmolAgent-based agentic approach. We further quantify sensitivity to common prompting issues and the impact of temperature settings. Contribution/Results: Experiments across Claude, Gemini, and OpenAI models reveal substantial performance disparities. Crucially, we provide empirical evidence that context design quality, prompt robustness, and temperature configuration are decisive factors for practical deployment. Our benchmark establishes a reproducible evaluation framework and actionable optimization pathways for LLM-driven data science.
📝 Abstract
Recent advances in large language models (LLMs) have significantly impacted data science workflows, giving rise to specialized data science agents designed to automate analytical tasks. Despite rapid adoption, systematic benchmarks evaluating the efficacy and limitations of these agents remain scarce. In this paper, we introduce a comprehensive benchmark crafted to reflect real-world user interactions with data science agents, grounded in observed usage of our commercial applications. We evaluate three LLMs (Claude-4.0-Sonnet, Gemini-2.5-Flash, and OpenAI-o4-Mini) across three approaches: zero-shot with context engineering, multi-step with context engineering, and SmolAgent-based agents. Our benchmark assesses performance across a diverse set of eight data science task categories, additionally exploring the sensitivity of models to common prompting issues, such as data leakage and slightly ambiguous instructions. We further investigate the influence of temperature parameters on overall and task-specific outcomes for each model and approach. Our findings reveal distinct performance disparities among the evaluated models and methodologies, highlighting critical factors that affect practical deployment. The benchmark dataset and evaluation framework introduced herein aim to provide a foundation for future research on more robust and effective data science agents.
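The evaluation grid the abstract describes (three models × three approaches × a range of temperatures, scored over eight task categories) can be sketched as a simple harness. This is a minimal illustration, not the paper's actual framework: every identifier below (`MODELS`, `APPROACHES`, `run_task`, the stub scorer) is a hypothetical placeholder, and a real run would replace the stub with calls to the respective model APIs.

```python
import itertools
import statistics

# Hypothetical configuration mirroring the benchmark's setup described in
# the abstract; names and values are illustrative assumptions only.
MODELS = ["claude-4.0-sonnet", "gemini-2.5-flash", "openai-o4-mini"]
APPROACHES = ["zero-shot", "multi-step", "smolagent"]
TEMPERATURES = [0.0, 0.5, 1.0]
NUM_TASK_CATEGORIES = 8  # e.g., data cleaning, feature engineering, diagnostics, ...

def run_task(model: str, approach: str, temperature: float, task_id: int) -> float:
    """Stub scorer standing in for a real LLM call; returns a score in [0, 1].

    A deterministic placeholder so the sketch runs without API keys; a real
    harness would prompt the model and grade its output here.
    """
    base = (hash((model, approach, task_id)) % 100) / 100.0
    return base * (1.0 - 0.1 * temperature)

def evaluate(task_ids):
    """Score every (model, approach, temperature) cell over all tasks."""
    results = {}
    for model, approach, temp in itertools.product(MODELS, APPROACHES, TEMPERATURES):
        scores = [run_task(model, approach, temp, t) for t in task_ids]
        results[(model, approach, temp)] = statistics.mean(scores)
    return results

scores = evaluate(range(NUM_TASK_CATEGORIES))
best_config = max(scores, key=scores.get)  # highest-scoring (model, approach, temp)
```

Separating the grid iteration from the scoring call keeps the harness reusable: swapping in a different approach or temperature sweep only changes the configuration lists, not the evaluation logic.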