🤖 AI Summary
This work addresses the limitations of current large language model (LLM) agents in handling real-world data analysis tasks that require reasoning over long, heterogeneous documents, a challenge exacerbated by the absence of suitable evaluation benchmarks. To bridge this gap, we introduce LongDA, a benchmark constructed from 17 U.S. national surveys, comprising 505 complex analytical queries that demand cross-document retrieval, information synthesis, and the generation of executable code. We also develop LongTA, a tool-augmented agent framework that enables systematic evaluation. Experimental results reveal substantial performance gaps among state-of-the-art open- and closed-source LLMs on this benchmark, underscoring the difficulty of such tasks and highlighting the current limitations of LLM agents in supporting high-stakes decision-making scenarios.
📝 Abstract
We introduce LongDA, a data analysis benchmark for evaluating LLM-based agents on documentation-intensive analytical workflows. In contrast to existing benchmarks that assume well-specified schemas and inputs, LongDA targets real-world settings in which navigating long documentation and complex data is the primary bottleneck. To this end, we manually curate raw data files, long and heterogeneous documentation, and expert-written publications from 17 publicly available U.S. national surveys, from which we extract 505 analytical queries grounded in real analytical practice. Solving these queries requires agents to first retrieve and integrate key information from multiple unstructured documents before performing multi-step computations and writing executable code, a combination that remains challenging for existing data analysis agents. To support systematic evaluation in this setting, we develop LongTA, a tool-augmented agent framework that enables document access, retrieval, and code execution, and we evaluate a range of proprietary and open-source models. Our experiments reveal substantial performance gaps even among state-of-the-art models, highlighting the challenges researchers should consider before applying LLM agents for decision support in real-world, high-stakes analytical settings.