AutoSDT: Scaling Data-Driven Discovery Tasks Toward Open Co-Scientists

πŸ“… 2025-06-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the critical bottleneck of scarce high-quality training and evaluation data in AI-augmented scientific discovery by proposing a fully automated task-collection pipeline, AutoSDT. Methodologically, it is an end-to-end pipeline that leverages the coding capabilities and parametric knowledge of LLMs to search diverse sources, select ecologically valid tasks, and jointly synthesize task instructions and code solutions. The authors release AutoSDT-5K, a large-scale, interdisciplinary, open-source dataset of 5,404 coding tasks for data-driven scientific discovery. Expert evaluation on a 256-task subset shows that 93% of the collected tasks are ecologically valid and 92.2% of the synthesized programs are functionally correct. The fine-tuned AutoSDT-Coder-32B matches GPT-4o on ScienceAgentBench with a 7.8% success rate, doubling its base model, and lifts the hypothesis matching score on DiscoveryBench to 8.1, a 17.4% relative improvement that narrows the gap between open-weight models and GPT-4o.

πŸ“ Abstract
Despite long-standing efforts in accelerating scientific discovery with AI, building AI co-scientists remains challenging due to limited high-quality data for training and evaluation. To tackle this data scarcity issue, we present AutoSDT, an automatic pipeline that collects high-quality coding tasks in real-world data-driven discovery workflows. AutoSDT leverages the coding capabilities and parametric knowledge of LLMs to search for diverse sources, select ecologically valid tasks, and synthesize accurate task instructions and code solutions. Using our pipeline, we construct AutoSDT-5K, a dataset of 5,404 coding tasks for data-driven discovery that covers four scientific disciplines and 756 unique Python packages. To the best of our knowledge, AutoSDT-5K is the only automatically collected and the largest open dataset for data-driven scientific discovery. Expert feedback on a subset of 256 tasks shows the effectiveness of AutoSDT: 93% of the collected tasks are ecologically valid, and 92.2% of the synthesized programs are functionally correct. Trained on AutoSDT-5K, the Qwen2.5-Coder-Instruct LLM series, dubbed AutoSDT-Coder, show substantial improvement on two challenging data-driven discovery benchmarks, ScienceAgentBench and DiscoveryBench. Most notably, AutoSDT-Coder-32B reaches the same level of performance as GPT-4o on ScienceAgentBench with a success rate of 7.8%, doubling the performance of its base model. On DiscoveryBench, it lifts the hypothesis matching score to 8.1, bringing a 17.4% relative improvement and closing the gap between open-weight models and GPT-4o.
Problem

Research questions and friction points this paper is trying to address.

Addresses data scarcity in AI-driven scientific discovery tasks
Automates collection of high-quality coding tasks for diverse disciplines
Improves AI model performance on data-driven discovery benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs search diverse sources and select ecologically valid tasks
AutoSDT jointly synthesizes task instructions and code solutions
AutoSDT-5K covers four scientific disciplines and 756 unique Python packages
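The three stages above (source search, ecological-validity filtering, joint instruction/code synthesis) can be sketched as a simple pipeline. This is an illustrative sketch only: all function names, the `Task` structure, and the stubbed stage bodies are assumptions, not the paper's actual implementation, which drives each stage with an LLM.

```python
# Hypothetical sketch of an AutoSDT-style three-stage pipeline.
# Stage bodies are stubs; in the real system each stage is LLM-driven.
from dataclasses import dataclass


@dataclass
class Task:
    instruction: str  # synthesized natural-language task instruction
    code: str         # synthesized code solution
    source: str       # where the underlying workflow was found


def search_sources(query: str) -> list[str]:
    """Stage 1: search diverse sources (e.g. code repositories) for
    candidate data-driven discovery workflows. Stubbed here."""
    return [f"{query}/workflow-a", f"{query}/workflow-b"]


def is_ecologically_valid(source: str) -> bool:
    """Stage 2: keep only tasks that reflect real scientific
    workflows. Stubbed as a trivial check."""
    return "workflow" in source


def synthesize_task(source: str) -> Task:
    """Stage 3: jointly generate an instruction and a code solution
    for the selected workflow. Stubbed here."""
    return Task(
        instruction=f"Reproduce the analysis in {source}",
        code="import pandas as pd  # ...analysis code...",
        source=source,
    )


def build_dataset(queries: list[str]) -> list[Task]:
    """Run all three stages over a list of discipline queries."""
    tasks = []
    for query in queries:
        for source in search_sources(query):
            if is_ecologically_valid(source):
                tasks.append(synthesize_task(source))
    return tasks


if __name__ == "__main__":
    dataset = build_dataset(["bioinformatics", "geoscience"])
    print(len(dataset))
```

The design choice worth noting is that filtering (stage 2) happens before synthesis (stage 3), so generation effort is only spent on sources that pass the validity check.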