🤖 AI Summary
Compiler optimizations designed for conventional hardware—relying on assumptions such as cache locality and branch prediction—are ill-suited for zero-knowledge virtual machines (zkVMs), where proof generation cost, not execution speed, dominates performance. Method: This work conducts the first fine-grained empirical analysis of LLVM optimization passes in zkVM contexts, evaluating 64 standard LLVM passes and six conventional optimization levels across 58 benchmarks on two RISC-V-based zkVMs: RISC Zero and SP1. Contribution/Results: We find that standard optimization levels improve proof generation performance by over 40%, yet most individual passes are ineffective or even detrimental. Guided by this insight, we propose lightweight, zkVM-aware modifications to key optimization passes, adapting them to the proof-cost model. Experimental results show that our modified optimizations further reduce proof generation time by up to 45%, yielding average proof-time reductions of 4.6% on RISC Zero and 1.0% on SP1.
📝 Abstract
Zero-knowledge proofs (ZKPs) are the cornerstone of programmable cryptography. They enable (1) privacy-preserving and verifiable computation across blockchains, and (2) an expanding range of off-chain applications such as credential schemes. Zero-knowledge virtual machines (zkVMs) lower the barrier to entry by turning ZKPs into a drop-in backend for standard compilation pipelines. This lets developers write proof-generating programs in conventional languages (e.g., Rust or C++) instead of hand-crafting arithmetic circuits. However, these VMs inherit compiler infrastructures tuned for traditional architectures rather than for proof systems. In particular, standard compiler optimizations assume hardware features that are absent in zkVMs, such as cache locality, branch prediction, and instruction-level parallelism. Their impact on proof generation cost is therefore unclear.
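To make the cost-model mismatch concrete, here is a deliberately simplified toy model (not taken from the paper, and not the actual cost model of RISC Zero or SP1): it assumes proof cost is roughly proportional to the number of executed instructions, since each executed cycle adds a row to the execution trace that must be proven. Under such a model, a transformation that shortens the dynamic trace cheapens the proof, even if branch prediction would hide the difference on a real CPU.

```python
def cycles(trace):
    """Toy zkVM cost: every executed instruction adds one trace row
    to be proven, so cost ~ dynamic instruction count."""
    return len(trace)

def branchy(n):
    """Sum the even numbers below n with a per-iteration branch.
    On a CPU, branch prediction makes the 'blt' nearly free; in the
    toy zkVM model, every executed instruction still costs a row."""
    trace, acc = [], 0
    for i in range(n):
        trace.append("blt")      # conditional branch, executed every iteration
        if i % 2 == 0:
            trace.append("add")
            acc += i
        else:
            trace.append("nop")  # stand-in for the not-taken path's work
    return acc, trace

def restructured(n):
    """Same computation after a (hypothetical) trace-shortening rewrite:
    iterate only over the even values, eliminating the branch entirely."""
    trace, acc = [], 0
    for i in range(0, n, 2):
        trace.append("add")
        acc += i
    return acc, trace

a1, t1 = branchy(1000)       # 1000 branches + 500 adds + 500 nops = 2000 rows
a2, t2 = restructured(1000)  # 500 adds = 500 rows
assert a1 == a2              # identical result, 4x cheaper proof in this model
print(cycles(t1), cycles(t2))
```

The point of the sketch is only that zkVM-aware optimization targets trace length rather than wall-clock latency; the paper's actual per-instruction cost models are more nuanced than this uniform one.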
We present the first systematic study of the impact of compiler optimizations on zkVMs. We evaluate 64 LLVM passes, six standard optimization levels, and an unoptimized baseline across 58 benchmarks on two RISC-V-based zkVMs (RISC Zero and SP1). While standard LLVM optimization levels do improve zkVM performance (by over 40%), their impact is far smaller than on traditional CPUs, since their decisions rely on hardware heuristics rather than proof-cost constraints. Guided by a fine-grained pass-level analysis, we *slightly* refine a small set of LLVM passes to be zkVM-aware, improving zkVM execution time by up to 45% (average +4.6% on RISC Zero, +1.0% on SP1) and achieving consistent proving-time gains. Our work highlights the potential of compiler-level optimizations for zkVM performance and opens new directions for zkVM-specific passes, backends, and superoptimizers.