🤖 AI Summary
This work addresses the limited ability of AI agents to perform evidence-based, multi-document reasoning over large-scale heterogeneous documents, spanning both unstructured text and tabular data, by introducing OfficeQA Pro, the first large-scale multimodal benchmark tailored for enterprise-grade, end-to-end grounded reasoning. The benchmark comprises 89,000 pages of U.S. Treasury Bulletins, 26 million numeric values, and 133 cross-document reasoning questions. Leveraging Databricks' ai_parse_document for structured parsing, the study integrates state-of-the-art large language models with retrieval-augmented generation and test-time scaling techniques. Experimental results reveal that even with full access to the corpus, frontier agents achieve only 34.1% average accuracy; incorporating structured document representations, however, yields a relative improvement of 16.1%, underscoring the benchmark's role in advancing reliable enterprise-grade reasoning systems.
📝 Abstract
We introduce OfficeQA Pro, a benchmark for evaluating AI agents on grounded, multi-document reasoning over a large and heterogeneous document corpus. The corpus consists of U.S. Treasury Bulletins spanning nearly 100 years, comprising 89,000 pages and over 26 million numerical values. OfficeQA Pro consists of 133 questions that require precise document parsing, retrieval, and analytical reasoning across both unstructured text and tabular data. Frontier LLMs including Claude Opus 4.6, GPT-5.4, and Gemini 3.1 Pro Preview achieve less than 5% accuracy on OfficeQA Pro when relying on parametric knowledge, and less than 12% with additional access to the web. When provided directly with the document corpus, frontier agents still fail on over half of the questions, scoring 34.1% on average. We find that providing agents with a structured document representation produced by Databricks' ai_parse_document yields a 16.1% average relative performance gain across agents. We conduct additional ablations to study the effects of model selection, table representation, retrieval strategy, and test-time scaling on performance. Despite these improvements, significant headroom remains before agents can be considered reliable at enterprise-grade grounded reasoning.
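To make the headline numbers concrete, the sketch below shows what a 16.1% *relative* gain over the reported 34.1% average accuracy would mean in absolute terms. The per-agent baselines and gains in the paper will differ; this is just the arithmetic behind the aggregate claim.

```python
# Relative vs. absolute improvement: a relative gain multiplies the
# baseline, rather than adding percentage points to it.
baseline_accuracy = 0.341   # average accuracy with direct corpus access
relative_gain = 0.161       # average relative gain from ai_parse_document

improved_accuracy = baseline_accuracy * (1 + relative_gain)
absolute_gain = improved_accuracy - baseline_accuracy

print(f"improved accuracy: {improved_accuracy:.1%}")  # roughly 39.6%
print(f"absolute gain:     {absolute_gain:.1%}")      # roughly 5.5 points
```

Note the distinction: a 16.1% relative gain on a 34.1% baseline corresponds to only about 5.5 percentage points of absolute improvement, which is why substantial headroom remains.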