🤖 AI Summary
To address the need for efficient Lean 4 verification in reinforcement learning (RL)-based theorem proving pipelines, this paper designs and implements the Kimina Lean Server: a lightweight, open-source server that provides a unified REST API supporting both high-concurrency interactive verification and large-scale batch processing. The key contributions are: (1) a parallel Lean REPL process pool manager with cross-request LRU caching, which sharply reduces redundant initialization overhead; and (2) an integrated infotree parser that automatically extracts tactic execution states and intermediate proof artifacts. Experiments demonstrate high-throughput verification at low latency, with batch processing throughput improving by 3.2× over baseline approaches. The system enables end-to-end automation and closed-loop feedback in RL-for-theorem-proving workflows.
📄 Abstract
We introduce the Kimina Lean Server, an open-source project that enables fast and scalable interaction with Lean 4 via a unified REST API, designed as a simple verifier for reinforcement learning pipelines. Built on top of the Lean FRO's Lean REPL, it combines server-side parallelism, managing multiple Lean REPL processes concurrently, with an LRU caching strategy that reuses Lean imports across requests. Together these reduce initialization overhead and allow large-scale batch processing of Lean code. The client-side interface lets users submit batches of proofs and receive Lean feedback, including tactics and tactic states extracted via infotree processing. The result is a high-performance, scalable workflow for both interaction and extraction of proofs, tactics, and tactic states. We open source our implementation on GitHub.