CatchAll: Repository-Aware Exception Handling with Knowledge-Guided LLMs

📅 2026-01-03
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge that large language models often generate inaccurate or incomplete exception-handling code in repository-scale software due to insufficient awareness of contextual and dependency information. To this end, the authors propose CatchAll, a novel approach that systematically integrates API-level exception knowledge, intra-repository calling context, and cross-repository reusable patterns to construct structured prompts that guide large language models toward generating precise exception-handling code. CatchAll achieves multi-granular knowledge injection through API–exception mapping, call-trace modeling, and cross-project pattern mining. Evaluated on the newly curated benchmark RepoExEval, CatchAll substantially outperforms existing methods, achieving a CodeBLEU score of 0.31, an intent-prediction accuracy of 60.1%, and a Pass@1 rate of 29%.
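The API–exception mapping mentioned above can be pictured as a lookup from API identifiers to the exception types they are empirically known to raise. The sketch below is purely illustrative: the mapped APIs, exception names, and helper function are assumptions for demonstration, not artifacts from the paper.

```python
# Hypothetical sketch of an API–exception mapping as a plain lookup table.
# The APIs and exception lists below are illustrative examples only.
API_EXCEPTION_MAP = {
    "open": ["FileNotFoundError", "PermissionError"],
    "json.loads": ["json.JSONDecodeError"],
    "socket.connect": ["ConnectionRefusedError", "TimeoutError"],
}

def exception_knowledge(api_calls):
    """Collect known exception types for API calls found in a target snippet."""
    return {api: API_EXCEPTION_MAP[api]
            for api in api_calls
            if api in API_EXCEPTION_MAP}

# APIs without a known mapping (e.g. "time.sleep" here) are simply skipped.
print(exception_knowledge(["open", "json.loads", "time.sleep"]))
```

In the approach described, knowledge of this shape would be injected into the prompt so the model knows which exceptions a call site can realistically throw, rather than guessing.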

πŸ“ Abstract
Exception handling is a vital forward error-recovery mechanism in many programming languages, enabling developers to manage runtime anomalies through structured constructs (e.g., try-catch blocks). Improper or missing exception handling often leads to severe consequences, including system crashes and resource leaks. While large language models (LLMs) have demonstrated strong capabilities in code generation, they struggle with exception handling at the repository level due to complex dependencies and contextual constraints. In this work, we propose CatchAll, a novel LLM-based approach for repository-aware exception handling. CatchAll equips LLMs with three complementary layers of exception-handling knowledge: (1) API-level exception knowledge, obtained from an empirically constructed API-exception mapping that characterizes the exception-throwing behaviors of APIs in real-world codebases; (2) repository-level execution context, which captures exception propagation by modeling contextual call traces around the target code; and (3) cross-repository handling knowledge, distilled from reusable exception-handling patterns mined from historical code across projects. The knowledge is encoded into structured prompts to guide the LLM in generating accurate and context-aware exception-handling code. To evaluate CatchAll, we construct two new benchmarks for repository-aware exception handling: a large-scale dataset RepoExEval and an executable subset RepoExEval-Exec. Experiments demonstrate that CatchAll consistently outperforms state-of-the-art baselines, achieving a CodeBLEU score of 0.31 (vs. 0.27 for the best baseline), intent prediction accuracy of 60.1% (vs. 48.0%), and Pass@1 of 29% (vs. 25%). These results affirm CatchAll's effectiveness in real-world repository-level exception handling.
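As a rough illustration of how the three knowledge layers could be encoded into a structured prompt, the sketch below assembles API-level, repository-level, and cross-repository knowledge into one prompt string. The section labels, helper name, and inputs are all assumptions; the paper's actual prompt template is not reproduced here.

```python
def build_prompt(target_code, api_knowledge, call_trace, handling_patterns):
    """Assemble a structured prompt from three (hypothetical) knowledge layers:
    API-level exceptions, repository-level calling context, and
    cross-repository handling patterns."""
    sections = [
        "## Target code\n" + target_code,
        "## Known API exceptions\n" + "\n".join(
            f"- {api}: {', '.join(excs)}" for api, excs in api_knowledge.items()
        ),
        "## Calling context\n" + "\n".join(f"- {frame}" for frame in call_trace),
        "## Reusable handling patterns\n" + "\n".join(
            f"- {p}" for p in handling_patterns
        ),
        "## Task\nAdd precise, context-aware exception handling to the target code.",
    ]
    return "\n\n".join(sections)

# Illustrative inputs, not drawn from RepoExEval.
prompt = build_prompt(
    target_code="data = json.loads(open(path).read())",
    api_knowledge={"open": ["FileNotFoundError"],
                   "json.loads": ["json.JSONDecodeError"]},
    call_trace=["load_config -> parse_settings (caller may retry on failure)"],
    handling_patterns=["log and re-raise with added context",
                       "fall back to default configuration"],
)
print(prompt)
```

The design point this sketch reflects is that each layer is kept in its own labeled section, so the model can attribute each piece of knowledge (which exceptions, which callers, which precedents) to its role when generating the handler.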
Problem

Research questions and friction points this paper is trying to address.

exception handling
repository-aware
large language models
code generation
contextual constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

repository-aware exception handling
knowledge-guided LLMs
API-exception mapping
exception propagation
structured prompting