Statistical Independence Aware Caching for LLM Workflows

๐Ÿ“… 2025-11-27
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
While local LLM caching reduces cost, improves efficiency, and enhances reproducibility, blind reuse of cached responses violates statistical independenceโ€”a foundational assumption for probabilistic reasoning and code generation tasks (e.g., Pass@k evaluation, uncertainty estimation, and iterative program repair). Existing caching systems lack mechanisms to enforce statistical constraints. Method: We propose Mnimi, a novel caching design paradigm targeting statistical integrity: it encapsulates statistical independence as an LLM-specific reference type, enabling type-driven constraint propagation across components via typed operations; in Python, it is realized through composable decorators and lazy iterators for fine-grained, semantics-aware cache control. Results: Evaluated on the SpecFix program repair system, Mnimi strictly preserves statistical correctness while significantly reducing inference overhead and latency, and simultaneously improving system maintainability and debugging efficiency.
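The paper's implementation is not reproduced in this listing, so the following is only a minimal Python sketch of the idea described above, under the assumption that "lazy iterators over infinite sequences" means a per-prompt sequence of samples in which each position is drawn from the model at most once, replayed across reruns, and distinct positions remain independent draws. The names (independent_samples, query_llm, _cache) are illustrative, not Mnimi's actual API.

```python
import functools
import itertools
import random
from typing import Callable, Dict, Iterator, Tuple

# Per-prompt cache indexed by sample position: position i is sampled at most
# once and replayed on later runs, while distinct positions stay independent.
_cache: Dict[Tuple[str, int], str] = {}

def independent_samples(sample_fn: Callable[[str], str]) -> Callable[[str], Iterator[str]]:
    """Turn a single-shot stochastic sampler into a lazy iterator over a
    conceptually infinite sequence of cached-but-independent samples."""
    @functools.wraps(sample_fn)
    def wrapper(prompt: str) -> Iterator[str]:
        for i in itertools.count():
            key = (prompt, i)
            if key not in _cache:
                _cache[key] = sample_fn(prompt)   # fresh draw for a new position
            yield _cache[key]                     # replayed draw for a seen position
    return wrapper

@independent_samples
def query_llm(prompt: str) -> str:
    # stand-in for a real temperature > 0 LLM call
    return f"response-{random.randint(0, 10**6)} to {prompt!r}"
```

Under this sketch, an algorithm such as Pass@k evaluation or a repair loop consumes distinct positions of the sequence, e.g. list(itertools.islice(query_llm(prompt), k)), so no cached response is reused within one statistical scope, while rerunning the whole experiment replays the same sequence for reproducibility.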

๐Ÿ“ Abstract
Large language model (LLM) inference is both expensive and slow. Local caching of responses offers a practical solution to reduce the cost and latency of LLM queries. In research contexts, caching also enhances reproducibility and provides flexibility for experimentation. However, naive reuse of cached responses compromises statistical independence, a critical property for probabilistic workflows. In applications of LLMs to code, it underpins performance metrics such as Pass@k and uncertainty estimation, as well as algorithms like program repair loops and retries. Existing LLM caching systems lack ways to enforce statistical independence constraints. To address this, we introduce Mnimi, a cache design pattern that supports modular LLM workflows while ensuring statistical integrity at the component level. Its core innovation lies in encapsulating statistical constraints within the type of LLM references, allowing users to manage and transform these types according to the scope and requirements of their algorithm. We implemented this design pattern in Python using a combination of decorators and iterators over infinite sequences. A case study on SpecFix, a recent automated program specification repair system, highlights how Mnimi improves reproducibility, ease of debugging, and time and cost efficiency while preserving statistical correctness.
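The abstract does not define Pass@k, but the commonly used unbiased estimator makes concrete why independence matters: from n samples of which c are correct, it computes the probability that at least one of k samples is correct, and it is only valid when the n samples are independent draws from the model, which is exactly what naive cache reuse breaks. A small sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k: probability that at least one of k samples is
    correct, given n total samples of which c are correct.  Only valid
    when the n samples are independent draws from the model."""
    if n - c < k:
        return 1.0  # every size-k subset contains a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)
```

If a cache silently returned the same response for several of the n draws, the effective sample size would shrink and the estimate would be biased; Mnimi's typed references are intended to rule out exactly that kind of reuse within a single statistical scope.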
Problem

Research questions and friction points this paper is trying to address.

Caching LLM responses compromises statistical independence
Existing caching systems lack statistical independence enforcement
Mnimi ensures statistical integrity in modular LLM workflows
Innovation

Methods, ideas, or system contributions that make the work stand out.

Caching design pattern ensures statistical independence in workflows
Encapsulates constraints within LLM reference types for management
Uses decorators and iterators over infinite sequences in Python
๐Ÿ”Ž Similar Papers
No similar papers found.
Yihan Dai
Peking University, Beijing, China
Dimitrios Stamatios Bouras
Peking University, Beijing, China
Haoxiang Jia
Peking University
Software Engineering
Sergey Mechtaev
Peking University
Program Repair, Program Analysis