AI Summary
While local LLM caching reduces cost, improves efficiency, and enhances reproducibility, blind reuse of cached responses violates statistical independence, a foundational assumption in probabilistic reasoning and code-generation tasks (e.g., Pass@k evaluation, uncertainty estimation, and iterative program repair). Existing caching systems lack mechanisms to enforce such statistical constraints. Method: We propose Mnimi, a caching design pattern that targets statistical integrity: it encapsulates statistical independence in an LLM-specific reference type, enabling type-driven constraint propagation across components via typed operations; in Python, it is realized through composable decorators and lazy iterators for fine-grained, semantics-aware cache control. Results: Evaluated on the SpecFix program-repair system, Mnimi preserves statistical correctness while significantly reducing inference overhead and latency, and it also improves system maintainability and debugging efficiency.
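To see why independence matters here: the standard unbiased Pass@k estimator (Chen et al., 2021) is only valid when the n generated samples are statistically independent, which blind cache reuse can silently break. A minimal sketch of that estimator:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator: probability that at least one of k samples,
    drawn without replacement from n generations of which c are correct, passes.
    The derivation assumes the n samples are statistically independent draws.
    """
    if n - c < k:
        # Fewer than k incorrect samples exist, so any k-subset contains a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

pass_at_k(10, 3, 1)  # equals c/n = 0.3
```

If cached responses are replayed as if they were fresh samples, the effective n shrinks and this estimator becomes biased, which is the failure mode the summary describes.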
Abstract
Large language model (LLM) inference is both expensive and slow. Local caching of responses offers a practical way to reduce the cost and latency of LLM queries. In research contexts, caching also enhances reproducibility and provides flexibility for experimentation. However, naive reuse of cached responses compromises statistical independence, a critical property for probabilistic workflows. In applications of LLMs for code, independence underpins performance metrics such as Pass@k and uncertainty estimation, as well as algorithms such as program-repair loops and retries. Existing LLM caching systems lack mechanisms to enforce statistical-independence constraints. To address this, we introduce Mnimi, a cache design pattern that supports modular LLM workflows while ensuring statistical integrity at the component level. Its core innovation lies in encapsulating statistical constraints within the type of LLM references, allowing users to manage and transform these types according to the scope and requirements of their algorithm. We implemented this design pattern in Python using a combination of decorators and iterators over infinite sequences. A case study on SpecFix, a recent automated program specification repair system, shows how Mnimi improves reproducibility, ease of debugging, and time and cost efficiency while preserving statistical correctness.
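The decorator-plus-infinite-iterator combination the abstract mentions can be illustrated with a toy sketch (the names `cached_samples` and `fake_llm` are hypothetical, not Mnimi's actual API): each consumer receives its own lazy iterator over a per-prompt sample stream, so distinct draws within a run stay independent (each cache miss triggers a fresh model call), while a replayed run deterministically sees the same samples.

```python
import functools
import itertools

def cached_samples(sample_fn):
    """Hypothetical decorator: memoize an unbounded stream of samples per prompt.

    The wrapped function returns a fresh lazy iterator over the same underlying
    stream: re-running an experiment replays identical samples (reproducible),
    while advancing one iterator past the cached prefix triggers new,
    independent model calls.
    """
    store = {}  # prompt -> list of samples fetched so far

    @functools.wraps(sample_fn)
    def wrapper(prompt):
        cache = store.setdefault(prompt, [])

        def stream():
            i = 0
            while True:  # conceptually infinite sequence of samples
                if i == len(cache):
                    cache.append(sample_fn(prompt))  # cache miss: one real query
                yield cache[i]
                i += 1

        return stream()
    return wrapper

# Toy stand-in for a sampling LLM call (deterministic here for demonstration).
_counter = itertools.count()

@cached_samples
def fake_llm(prompt):
    return f"{prompt}#{next(_counter)}"

it = fake_llm("repair this spec")
first_two = [next(it) for _ in range(2)]                      # two distinct draws
replay = list(itertools.islice(fake_llm("repair this spec"), 2))  # replays the same two
```

This is only a sketch of the pattern under stated assumptions; Mnimi additionally tracks independence requirements in the type of the LLM reference, which this toy omits.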