Finch: Benchmarking Finance & Accounting across Spreadsheet-Centric Enterprise Workflows

📅 2025-12-15
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Evaluating AI agents on real-world enterprise financial accounting (multi-source data entry, cross-sheet computation, formula modeling, multi-document retrieval, and visualization/reporting) remains challenging due to the domain's complexity and the lack of high-fidelity benchmarks. Method: We introduce Finch, the first high-fidelity evaluation benchmark for financial-accounting agents. Built on 15,000 messy spreadsheets and 500,000 raw emails from Enron and similar organizations, Finch employs an LLM-assisted, expert-validated workflow-mining paradigm to systematically capture four intrinsic characteristics of the domain: long-horizon execution, high collaboration intensity, strong domain-knowledge dependency, and multimodal data heterogeneity. The benchmark comprises 172 composite workflows and 384 fine-grained tasks. Evaluation integrates automated scoring (using GPT-5.1 and Claude Sonnet 4.5) with human verification, leveraging table parsing, formula reasoning, and structured-output validation. Results: State-of-the-art agents pass at most 38.4% of workflows (GPT 5.1 Pro), and Claude Sonnet 4.5 passes just 25.0%, exposing critical bottlenecks such as logical discontinuity, context forgetting, and cross-document inconsistency.
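The scoring harness itself is not published on this page, so the following is only a rough sketch of what cell-level table parsing with structured-output validation can look like: it grades an agent-produced workbook against an expert gold workbook using openpyxl. The file names, sheet name, tolerance, and grading rule are assumptions for illustration, not Finch's actual implementation.

```python
# Hypothetical cell-level checker in the spirit of Finch's automated scoring;
# names, tolerance, and grading rule are illustrative assumptions.
import math
from openpyxl import load_workbook

NUM_TOL = 1e-6  # assumed tolerance for numeric cells

def cells_match(expected, actual):
    """Numerics compare within tolerance; everything else as stripped text."""
    if isinstance(expected, (int, float)) and isinstance(actual, (int, float)):
        return math.isclose(expected, actual, rel_tol=NUM_TOL, abs_tol=NUM_TOL)
    norm = lambda v: "" if v is None else str(v).strip()
    return norm(expected) == norm(actual)

def score_task(gold_path, agent_path, sheet_name):
    """Return the fraction of expert-filled gold cells the agent reproduced.
    data_only=True reads cached formula results rather than formula strings."""
    gold = load_workbook(gold_path, data_only=True)[sheet_name]
    pred = load_workbook(agent_path, data_only=True)[sheet_name]
    total = matched = 0
    for row in gold.iter_rows():
        for cell in row:
            if cell.value is None:  # grade only cells the experts filled
                continue
            total += 1
            if cells_match(cell.value, pred[cell.coordinate].value):
                matched += 1
    return matched / total if total else 1.0

# Example with placeholder paths:
# print(score_task("gold/budget.xlsx", "agent/budget.xlsx", "Q3"))
```

In practice such a checker would be one component; free-form outputs such as reports would still need the LLM-based scoring plus human verification the summary describes.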

📝 Abstract
We introduce a finance & accounting benchmark (Finch) for evaluating AI agents on real-world, enterprise-grade professional workflows that interleave data entry, structuring, formatting, web search, cross-file retrieval, calculation, modeling, validation, translation, visualization, and reporting. Finch is sourced from authentic enterprise workspaces at Enron (15,000 spreadsheets and 500,000 emails from 150 employees) and other financial institutions, preserving in-the-wild messiness across multimodal artifacts (text, tables, formulas, charts, code, and images) and spanning diverse domains such as budgeting, trading, and asset management. We propose a workflow construction process that combines LLM-assisted discovery with expert annotation: (1) LLM-assisted, expert-verified derivation of workflows from real-world email threads and the version histories of spreadsheet files, and (2) meticulous expert annotation of those workflows, requiring over 700 hours of domain-expert effort. This yields 172 composite workflows with 384 tasks, involving 1,710 spreadsheets with 27 million cells, along with PDFs and other artifacts, capturing the intrinsically messy, long-horizon, knowledge-intensive, and collaborative nature of real-world enterprise work. We conduct both human and automated evaluations of frontier AI systems including GPT 5.1, Claude Sonnet 4.5, Gemini 3 Pro, Grok 4, and Qwen 3 Max: GPT 5.1 Pro spends 48 hours in total yet passes only 38.4% of workflows, while Claude Sonnet 4.5 passes just 25.0%. Comprehensive case studies further surface the challenges that real-world enterprise workflows pose for AI agents.
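Finch's release format is not described on this page, so purely as a mental model of the statistics above (172 composite workflows containing 384 tasks over 1,710 spreadsheets), the dataclasses below sketch one plausible schema; every field name is a hypothetical choice, not the benchmark's actual layout.

```python
# Hypothetical schema for a Finch-style record; all field names are
# illustrative guesses, not the benchmark's actual release format.
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Task:
    task_id: str
    instruction: str        # fine-grained step, e.g. "reconcile Q3 totals"
    input_files: List[str]  # spreadsheets, PDFs, email threads
    skills: List[str]       # e.g. ["cross-file retrieval", "formula modeling"]
    gold_output: str        # path to the expert-annotated reference artifact

@dataclass
class Workflow:
    workflow_id: str
    domain: str             # e.g. "budgeting", "trading", "asset management"
    source: str             # provenance, e.g. an Enron email thread
    tasks: List[Task] = field(default_factory=list)

    def task_pass_rate(self, passed: Set[str]) -> float:
        """Fraction of this workflow's tasks whose IDs an agent passed."""
        if not self.tasks:
            return 0.0
        return sum(t.task_id in passed for t in self.tasks) / len(self.tasks)
```

A composite workflow would presumably count as passed only when its constituent tasks all succeed, which is one way long-horizon execution compounds into the low workflow pass rates reported above.
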
Problem

Research questions and friction points this paper is trying to address.

Evaluates AI agents on real-world finance and accounting workflows
Benchmarks AI performance using authentic enterprise data and spreadsheets
Assesses AI on complex, multimodal tasks like calculation and reporting
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-assisted workflow discovery from emails and spreadsheets (see the sketch after this list)
Expert annotation of 172 composite workflows with 384 tasks
Evaluation of AI agents on real-world enterprise financial workflows
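To make the two-stage pipeline in these bullets concrete, here is a purely illustrative orchestration of LLM-assisted discovery gated by expert verification; propose_workflows and expert_review are hypothetical stand-ins, not functions from the paper.

```python
# Illustrative-only mining loop: an LLM drafts candidate workflows from an
# email thread plus the version history of its spreadsheets, and a domain
# expert decides what enters the benchmark. All names are hypothetical.
from typing import Iterable, List, Tuple

def propose_workflows(thread: str, sheet_versions: List[str]) -> List[dict]:
    """Stand-in for an LLM prompt over one thread and its file versions."""
    return []  # replace with a real LLM call

def expert_review(draft: dict) -> bool:
    """Stand-in for the human gate (the paper reports 700+ expert hours)."""
    return False  # replace with expert verification and annotation

def mine_workflows(corpus: Iterable[Tuple[str, List[str]]]) -> List[dict]:
    """Keep only LLM-drafted workflows that survive expert review."""
    accepted = []
    for thread, sheet_versions in corpus:
        for draft in propose_workflows(thread, sheet_versions):
            if expert_review(draft):
                accepted.append(draft)
    return accepted
```
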
Authors

Haoyu Dong, Pengkun Zhang, Yan Gao, Xuanyu Dong, Yilin Cheng, Mingzhe Lu, Adina Yakefu, Shuxin Zheng (Deputy Director, Zhongguancun Institute of Artificial Intelligence)