Benchmark Dataset Generation and Evaluation for Excel Formula Repair with LLMs

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
High-quality benchmark datasets for automatic repair of Excel formula semantic runtime errors—such as logical flaws and function misuse—are scarce. Method: This paper proposes an LLM-based synthetic data construction framework: leveraging real-world forum examples as seeds, few-shot prompting to generate candidate formulas, execution-based validation to ensure semantic correctness, and an LLM-as-a-Judge mechanism for multi-dimensional quality assessment and filtering. Contribution/Results: We introduce ExcelFixBench, the first manually verified benchmark comprising 618 high-quality samples covering prevalent Excel runtime error types. We further propose a context-aware repair baseline model and a scalable data generation and validation framework applicable to low-resource programming scenarios. Extensive experiments across multiple state-of-the-art LLMs validate the dataset’s utility and reveal fundamental limitations of current models in Excel semantic repair tasks.

📝 Abstract
Excel is a pervasive yet often complex tool, particularly for novice users, where runtime errors arising from logical mistakes or misinterpretations of functions pose a significant challenge. While large language models (LLMs) offer promising assistance by explaining formula errors, the automated correction of these semantic runtime errors remains an open problem. A primary challenge to advancing models for such scenarios is the severe lack of high-quality, comprehensive datasets for training and rigorous evaluation. This paper addresses this gap by introducing a novel approach for constructing a benchmark dataset specifically designed for Excel formula repair. We propose a data generation pipeline, which leverages a small set of curated seed samples from online forums to synthetically expand the dataset. Our pipeline integrates few-shot prompting with LLMs and employs a robust LLM-as-a-Judge validation framework, combined with execution-based checks, to ensure the correctness and semantic fidelity of the generated data. This process produced a benchmark dataset of 618 high-quality samples covering common runtime errors. Furthermore, we propose a context-aware baseline technique for Excel formula repair that utilizes LLMs to leverage both the faulty formula and relevant spreadsheet context. We evaluate the performance of various LLMs (GPT-4o, GPT-4.1, Phi-3, Mistral) on our newly generated benchmark using execution-based metrics. Our analysis demonstrates the dataset's quality through manual annotation and provides insights into error and function distributions. The proposed generation methodology is highly scalable and can be readily adapted to create evaluation benchmarks for similar code repair tasks in other low-resource programming languages.
Problem

Research questions and friction points this paper is trying to address.

Automated correction of Excel formula semantic runtime errors
Lack of high-quality datasets for training and evaluation
Generating a scalable benchmark for low-resource code repair tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic dataset generation pipeline using few-shot LLM prompting
LLM-as-a-Judge validation with execution-based correctness checks
Context-aware baseline technique leveraging spreadsheet context for repair
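The generation-and-filtering loop described above (few-shot candidate generation, execution-based validation, LLM-as-a-Judge scoring) can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the helper names (`generate_candidates`, `judge_score`, `evaluate_formula`) are hypothetical stubs, and a real pipeline would call an LLM API and a spreadsheet engine instead of the toy evaluator used here.

```python
def evaluate_formula(formula, cells):
    """Toy stand-in for execution: evaluates only =SUM(...) over named cells.
    A real pipeline would execute the formula in an actual spreadsheet engine."""
    if formula.startswith("=SUM(") and formula.endswith(")"):
        names = formula[5:-1].split(",")
        return sum(cells[n.strip()] for n in names)
    raise ValueError("unsupported formula")

def generate_candidates(broken_formula, seed_examples, n=3):
    """Hypothetical stub for few-shot LLM prompting seeded with forum examples."""
    return ["=SUM(A1,A2)", "=SUM(A1,A3)", "=SUM(A1, A2)"][:n]

def judge_score(candidate, broken_formula):
    """Hypothetical stub for LLM-as-a-Judge quality scoring (0..1)."""
    return 0.9 if "A2" in candidate else 0.2

def build_sample(broken_formula, cells, expected, seed_examples, threshold=0.5):
    """Keep candidates that both execute to the expected value and pass the judge."""
    kept = []
    for cand in generate_candidates(broken_formula, seed_examples):
        try:
            if evaluate_formula(cand, cells) != expected:
                continue  # execution-based validation failed
        except ValueError:
            continue  # candidate did not execute at all
        if judge_score(cand, broken_formula) >= threshold:
            kept.append(cand)  # passed the multi-dimensional quality filter
    return kept

cells = {"A1": 2, "A2": 3, "A3": 10}
repairs = build_sample("=SUM(A1,B2)", cells, expected=5, seed_examples=[])
```

In this sketch the candidate `=SUM(A1,A3)` is discarded because it evaluates to the wrong value, illustrating how execution checks and judge scores act as independent filters before a sample enters the benchmark.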
Authors
Ananya Singha — Associate Researcher at Microsoft (NLP · AI · HCI)
Harshita Sahijwani — Microsoft (Information Retrieval · Conversational Systems · Natural Language Processing)
Walt Williams — Microsoft, United States
Emmanuel Aboah Boateng — Microsoft, United States
Nick Hausman — Microsoft, United States
Miguel Di Luca — Microsoft, United States
Keegan Choudhury — Microsoft, United States
Chaya Binet — Microsoft, United States
Vu Le — Microsoft (Program Synthesis · Machine Learning)
Tianwei Chen — Microsoft, United States
Oryan Rokeah Chen — Microsoft, United States
Sulaiman Vesal — Microsoft (Deep Learning · Machine Learning · LLM/SLM · VLM)
Sadid Hasan — Microsoft, United States