🤖 AI Summary
This work addresses the absence of "investigatory intelligence"—the capacity of large language models (LLMs) to autonomously explore data and uncover critical insights without explicit instructions—and the lack of benchmarks for evaluating it. It introduces the Deep Data Research (DDR) task, which formally defines investigatory intelligence and distinguishes it from executional intelligence, and presents DDR-Bench, the first large-scale, verifiable benchmark for open-ended data exploration. Using an LLM-based agent with self-directed goal setting, long-horizon exploration strategies, and checklist-based evaluation, experiments show that state-of-the-art models exhibit nascent investigatory intelligence but remain limited on extended tasks, and that their performance depends more on intrinsic reasoning strategies than on model architecture or scale alone.
📝 Abstract
The agency expected of agentic Large Language Models goes beyond answering correctly: it requires the autonomy to set goals and decide what to explore. We term this investigatory intelligence, distinguishing it from executional intelligence, which merely completes assigned tasks. Data science provides a natural testbed, as real-world analysis starts from raw data rather than explicit queries, yet few benchmarks focus on it. To address this, we introduce Deep Data Research (DDR), an open-ended task in which LLMs autonomously extract key insights from databases, and DDR-Bench, a large-scale, checklist-based benchmark that enables verifiable evaluation. Results show that while frontier models display emerging agency, long-horizon exploration remains challenging. Our analysis highlights that effective investigatory intelligence depends not only on agent scaffolding or sheer scale, but also on the intrinsic strategies of agentic models.