AI Summary
This work addresses the lack of standardized, verifiable evaluation frameworks for assessing instruction following and modeling-process fidelity in large language models (LLMs) on data science tasks, a gap compounded by the scarcity of high-quality annotated data. To this end, we introduce DARE-bench, the first executable and verifiable benchmark built on 6,300 Kaggle tasks, enabling evaluation of tool usage and multi-step modeling workflows. DARE-bench establishes the first assessment framework grounded in objective ground truth that jointly evaluates instruction adherence and process fidelity. Experimental results show that even leading models such as GPT-4o-mini perform poorly on these tasks, whereas fine-tuning Qwen3-32B on DARE-bench yields a 1.83× accuracy improvement, and reinforcement learning boosts Qwen3-4B's performance by more than 8×, strongly validating the benchmark's effectiveness and practical utility.
Abstract
The fast-growing demand for Large Language Models (LLMs) that can tackle complex multi-step data science tasks creates an urgent need for accurate benchmarking. Existing benchmarks have two major gaps: (i) the lack of standardized, process-aware evaluation that captures instruction adherence and process fidelity, and (ii) the scarcity of accurately labeled training data. To bridge these gaps, we introduce DARE-bench, a benchmark designed for machine learning modeling and data science instruction following. Unlike many existing benchmarks that rely on human or model-based judges, all tasks in DARE-bench have verifiable ground truth, ensuring objective and reproducible evaluation. To cover a broad range of tasks and support agentic tools, DARE-bench comprises 6,300 Kaggle-derived tasks and provides both large-scale training data and evaluation sets. Extensive evaluations show that even highly capable models such as gpt-o4-mini struggle to achieve good performance, especially on machine learning modeling tasks. Using DARE-bench training tasks for fine-tuning substantially improves model performance: supervised fine-tuning boosts Qwen3-32B's accuracy by 1.83×, and reinforcement learning boosts Qwen3-4B's accuracy by more than 8×. These significant improvements confirm the value of DARE-bench both as an accurate evaluation benchmark and as critical training data.