🤖 AI Summary
Existing code generation benchmarks for LLMs neither adequately cover nor rigorously evaluate real-world data science tasks. Method: We introduce DS-bench, the first Python data science code generation benchmark built from 1,000 authentic GitHub issues, spanning ten core libraries (e.g., pandas, NumPy) and emphasizing task complexity, long-code generation, unambiguous problem specifications, and strong test-based validation. Its construction integrates real-scenario curation, multi-stage automated code and test generation, structured problem rephrasing, and human-in-the-loop verification. Contribution/Results: Experiments show that even the state-of-the-art model GPT-4o achieves only 0.202 pass@1, far below its scores on simpler benchmarks, demonstrating DS-bench's heightened difficulty and representativeness. The benchmark fills a critical gap in evaluating realistic data science programming proficiency and provides a robust, scalable way to assess model robustness and generalization in practical coding scenarios.
📝 Abstract
We introduce DS-bench, a new benchmark designed to evaluate large language models (LLMs) on complicated and realistic data science code generation tasks. DS-bench consists of 1,000 carefully constructed problems sourced from realistic GitHub problems across ten widely used Python data science libraries. Compared to the current state-of-the-art benchmark DS-1000, DS-bench offers a more challenging and representative testbed: longer code solutions, broader data science library coverage, clearer and better-structured problem descriptions, and stronger test suites. To construct DS-bench, we develop a robust pipeline that combines task scope selection, code construction, test case generation, and problem description synthesis, paired with rigorous manual editing to ensure alignment and enhance evaluation reliability. Experimental results show that DS-bench exhibits robust scaling behavior: larger models systematically outperform smaller ones, validating its ability to distinguish model capabilities. The best LLM we test, GPT-4o, achieves a pass@1 of only 0.202, indicating that LLMs still have substantial room for improvement on realistic data science code generation tasks. We believe DS-bench will serve as a rigorous and trustworthy foundation for advancing LLM-based data science programming.
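The pass@1 figure quoted above follows the standard pass@k convention for code benchmarks. As a sketch (assuming DS-bench uses the usual unbiased estimator from the HumanEval line of work, which the abstract does not spell out), pass@k for a single problem with n sampled completions of which c pass is 1 - C(n-c, k)/C(n, k), averaged over all problems; with k=1 this reduces to the fraction of samples that pass.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator for one problem:
    probability that at least one of k samples, drawn without
    replacement from n generated solutions of which c are correct,
    passes the test suite."""
    if n - c < k:
        # Fewer failing samples than draws: success is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical per-problem pass counts out of n=10 samples each;
# benchmark-level pass@1 is the mean over problems.
counts = [0, 2, 10]
scores = [pass_at_k(n=10, c=c, k=1) for c in counts]
print(sum(scores) / len(scores))  # 0.4
```

For k=1 the formula collapses to c/n per problem, so a benchmark-level pass@1 of 0.202 means roughly one in five single-shot attempts passes the DS-bench test suite.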