🤖 AI Summary
This paper studies abstract machines for Wu's positive λ-calculus, a call-by-value λ-calculus with sharing that arose from Miller and Wu's work on the proof-theoretical concept of focalization. Although the positive λ-calculus rules out renaming chains, the recurrent source of quadratic time slowdowns in the study of sharing, the authors show that the natural abstract machine for the calculus still suffers an inefficiency: a quadratic overhead reappears when analyzing the machine's cost. They then design an optimized abstract machine, based on a new slicing technique that is dual to the standard structure of machine environments, and prove it efficient, eliminating the quadratic slowdown of the natural machine.
📝 Abstract
Wu's positive $\lambda$-calculus is a recent call-by-value $\lambda$-calculus with sharing, coming from Miller and Wu's study of the proof-theoretical concept of focalization. Accattoli and Wu showed that it simplifies a technical aspect of the study of sharing: namely, it rules out the recurrent issue of renaming chains, which often causes a quadratic time slowdown. In this paper, we define the natural abstract machine for the positive $\lambda$-calculus and show that it suffers from an inefficiency: the quadratic slowdown reappears when analyzing the cost of the machine. We then design an optimized machine for the positive $\lambda$-calculus, which we prove efficient. The optimization is based on a new slicing technique that is dual to the standard structure of machine environments.
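To give an intuition for the renaming chains mentioned above, the following is a minimal sketch (not taken from the paper; all names are hypothetical) of how chains of variable-to-variable entries in a machine environment make lookups expensive: each lookup may have to traverse the whole chain, and repeating such lookups yields the quadratic behavior.

```python
def lookup(env, x):
    """Follow renaming links (variable -> variable entries) until a
    non-variable value is reached. Returns the value and the number
    of links traversed."""
    steps = 0
    while x in env and isinstance(env[x], str):
        x = env[x]          # follow one renaming link
        steps += 1
    return env.get(x, x), steps

# An environment containing a renaming chain x0 -> x1 -> x2 -> x3 -> x4 -> 42.
env = {f"x{i}": f"x{i+1}" for i in range(4)}
env["x4"] = 42

value, steps = lookup(env, "x0")           # traverses 4 links to reach 42

# Looking up every variable in a chain of length n costs
# n + (n-1) + ... + 1 link traversals in total, i.e. O(n^2).
total = sum(lookup(env, f"x{i}")[1] for i in range(5))
```

The positive λ-calculus rules out such chains at the level of the calculus; the paper's point is that the natural machine for it nonetheless reintroduces an analogous quadratic cost, which the dual slicing technique removes.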