🤖 AI Summary
Problem: High-quality benchmark datasets for automatically repairing semantic runtime errors in Excel formulas, such as logical flaws and function misuse, are scarce.
Method: This paper proposes an LLM-based synthetic data construction framework: real-world forum examples serve as seeds, few-shot prompting generates candidate formulas, execution-based validation ensures semantic correctness, and an LLM-as-a-Judge mechanism performs multi-dimensional quality assessment and filtering (a sketch follows below).
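As a minimal sketch of how such a pipeline could be wired together, the code below chains few-shot generation, execution checks, and judge filtering. The `llm`, `engine`, and `judge` objects, the example format, and the 1-5 judge threshold are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    context: str          # serialized spreadsheet snippet (cell addresses and values)
    broken_formula: str   # formula that triggers a semantic runtime error
    fixed_formula: str    # candidate ground-truth repair

def format_shot(s: Sample) -> str:
    return f"Context:\n{s.context}\nBroken: {s.broken_formula}\nFixed: {s.fixed_formula}"

def parse_sample(text: str) -> Sample:
    """Parse one generated example back into a Sample (format assumed above)."""
    ctx, rest = text.split("Broken:", 1)
    broken, fixed = rest.split("Fixed:", 1)
    return Sample(ctx.replace("Context:", "").strip(), broken.strip(), fixed.strip())

def generate_candidates(seeds: list[Sample], llm, n: int) -> list[Sample]:
    """Few-shot prompt an LLM with curated forum seeds to synthesize new samples."""
    prompt = ("Produce one new Excel formula repair example "
              "in exactly the same format as these examples:\n\n"
              + "\n\n".join(format_shot(s) for s in seeds))
    return [parse_sample(llm.complete(prompt)) for _ in range(n)]

def passes_validation(s: Sample, engine, judge) -> bool:
    """Execution-based check plus LLM-as-a-Judge multi-dimensional filter."""
    # The broken formula must actually raise a runtime error on the given
    # sheet, while the repaired formula must evaluate cleanly.
    if not engine.evaluate(s.broken_formula, s.context).is_error:
        return False
    if engine.evaluate(s.fixed_formula, s.context).is_error:
        return False
    # Judge scores along several quality dimensions; the dimension names and
    # the 1-5 threshold here are assumptions, not the paper's exact rubric.
    scores = judge.rate(s, dimensions=("realism", "semantic_fidelity", "clarity"))
    return min(scores.values()) >= 4
```

Running the cheap, deterministic execution checks before the LLM judge keeps the expensive judging calls limited to candidates that are at least mechanically valid.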
Contribution/Results: We introduce ExcelFixBench, the first manually verified benchmark for this task, comprising 618 high-quality samples that cover prevalent Excel runtime error types. We further propose a context-aware repair baseline (illustrated below) and a scalable data generation and validation framework applicable to low-resource programming scenarios. Extensive experiments across multiple state-of-the-art LLMs validate the dataset's utility and reveal fundamental limitations of current models on Excel semantic repair tasks.
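As a rough illustration of the context-aware baseline, the sketch below assembles a repair prompt from the faulty formula plus nearby cell contents. The prompt wording, the cell serialization, and the example error are assumptions rather than the paper's exact setup:

```python
def build_repair_prompt(broken_formula: str, context_cells: dict[str, object]) -> str:
    """Combine the faulty formula with nearby cell contents as repair context."""
    ctx = "\n".join(f"{addr} = {val!r}" for addr, val in sorted(context_cells.items()))
    return (
        "The following Excel formula produces a runtime error.\n"
        f"Spreadsheet context:\n{ctx}\n"
        f"Faulty formula: {broken_formula}\n"
        "Reply with only the corrected formula."
    )

# Example: B1 = 0 makes the division raise #DIV/0!; including the context
# lets the model see that a guard such as IFERROR, or an IF on B1, is needed.
prompt = build_repair_prompt(
    "=SUM(A1:A3)/B1",
    {"A1": 10, "A2": 7, "A3": 5, "B1": 0},
)
print(prompt)
```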
📝 Abstract
Excel is a pervasive yet often complex tool, particularly for novice users, for whom runtime errors arising from logical mistakes or misinterpreted functions pose a significant challenge. While large language models (LLMs) offer promising assistance by explaining formula errors, the automated correction of these semantic runtime errors remains an open problem. A primary obstacle to advancing models in this setting is the severe lack of high-quality, comprehensive datasets for training and rigorous evaluation. This paper addresses the gap by introducing a novel approach for constructing a benchmark dataset specifically designed for Excel formula repair. We propose a data generation pipeline that leverages a small set of curated seed samples from online forums to synthetically expand the dataset. Our pipeline integrates few-shot prompting with LLMs and employs a robust LLM-as-a-Judge validation framework, combined with execution-based checks, to ensure the correctness and semantic fidelity of the generated data. This process produced a benchmark dataset of 618 high-quality samples covering common runtime errors. Furthermore, we propose a context-aware baseline technique for Excel formula repair that supplies LLMs with both the faulty formula and the relevant spreadsheet context. We evaluate the performance of various LLMs (GPT-4o, GPT-4.1, Phi-3, Mistral) on our newly generated benchmark using execution-based metrics. Our analysis demonstrates the dataset's quality through manual annotation and provides insights into error and function distributions. The proposed generation methodology is highly scalable and can be readily adapted to create evaluation benchmarks for similar code repair tasks in other low-resource programming languages.
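To make the execution-based metrics concrete, here is a minimal sketch of how repairs could be scored. The `engine.evaluate` interface (returning an object with `is_error` and `value`) is an assumed wrapper around a spreadsheet evaluator, not an API from the paper:

```python
def execution_match(predicted: str, gold: str, context, engine) -> bool:
    """A repair counts as correct if it evaluates without a runtime error and
    its computed value matches the gold repair's value on the same sheet."""
    pred = engine.evaluate(predicted, context)
    ref = engine.evaluate(gold, context)
    return (not pred.is_error) and pred.value == ref.value

def benchmark_accuracy(predictions, samples, engine) -> float:
    """Fraction of benchmark samples whose predicted repair execution-matches."""
    hits = sum(
        execution_match(p, s.fixed_formula, s.context, engine)
        for p, s in zip(predictions, samples)
    )
    return hits / len(samples)
```

Comparing executed values rather than formula strings credits semantically equivalent repairs that differ syntactically from the gold formula.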