🤖 AI Summary
The AI development bottleneck has shifted from model architecture to data availability, yet existing methods cannot autonomously discover and synthesize datasets that satisfy specific, real-world requirements.
Method: We introduce DatasetResearch—the first demand-driven benchmark for dataset discovery—featuring a three-dimensional evaluation framework that integrates deep research agents, large-scale web retrieval, structured data generation, and multi-faceted human evaluation across 208 authentic requirement scenarios.
Contribution: We establish the first rigorous baseline, revealing a fundamental capability gap between search and synthesis agents. Our analysis shows that even the state-of-the-art deep research system achieves only a 22% score on the challenging out-of-distribution subset DatasetResearch-pro, exposing critical limitations on "corner-case" requirements. The benchmark is publicly released to foster the tight integration of autonomous dataset retrieval and generation.
📝 Abstract
The rapid advancement of large language models has fundamentally shifted the bottleneck in AI development from computational power to data availability, with countless valuable datasets remaining hidden across specialized repositories, research appendices, and domain platforms. As reasoning capabilities and deep research methodologies continue to evolve, a critical question emerges: can AI agents transcend conventional search to systematically discover any dataset that meets specific user requirements, enabling truly autonomous demand-driven data curation? We introduce DatasetResearch, the first comprehensive benchmark evaluating AI agents' ability to discover and synthesize datasets from 208 real-world demands across knowledge-intensive and reasoning-intensive tasks. Our tri-dimensional evaluation framework reveals a stark reality: even advanced deep research systems achieve a score of only 22% on our challenging DatasetResearch-pro subset, exposing the vast gap between current capabilities and reliable dataset discovery. Our analysis uncovers a fundamental dichotomy: search agents excel at knowledge tasks through retrieval breadth, while synthesis agents dominate reasoning challenges via structured generation, yet both catastrophically fail on "corner cases" outside existing distributions. These findings establish the first rigorous baseline for dataset discovery agents and illuminate the path toward AI systems capable of finding any dataset in the digital universe. Our benchmark and comprehensive analysis provide the foundation for the next generation of self-improving AI systems and are publicly available at https://github.com/GAIR-NLP/DatasetResearch.