Kimina Lean Server: Technical Report

πŸ“… 2025-04-29
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the need for efficient Lean 4 verification in reinforcement learning (RL) theorem-proving pipelines, this paper designs and implements the Kimina Lean Server, a lightweight, open-source server that exposes a unified REST API supporting both high-concurrency interactive verification and large-scale batch processing. Key contributions are: (1) a parallel Lean REPL process-pool manager with cross-request LRU caching, drastically reducing redundant initialization overhead; and (2) an integrated infotree parser that automatically extracts tactic execution states and intermediate proof artifacts. Experiments demonstrate high-throughput, low-latency verification: batch-processing throughput improves by 3.2× over baseline approaches. The system enables end-to-end automation and closed-loop feedback in RL-for-theorem-proving workflows.
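The cross-request LRU cache described above can be sketched as a small capacity-bounded map from a Lean import header to a warm REPL handle. This is a minimal illustration, not the server's actual implementation: the class name `ReplCache` and the use of strings as stand-in handles are assumptions for the example.

```python
from collections import OrderedDict

class ReplCache:
    """Minimal LRU cache mapping a Lean import header to a (hypothetical)
    warm REPL handle, so repeated requests sharing the same imports skip
    re-initialization. Least recently used entry is evicted at capacity."""

    def __init__(self, capacity: int = 4):
        self.capacity = capacity
        self._cache: "OrderedDict[str, object]" = OrderedDict()

    def get(self, header: str):
        # Return the cached handle and mark it most recently used, or None on miss.
        if header not in self._cache:
            return None
        self._cache.move_to_end(header)
        return self._cache[header]

    def put(self, header: str, handle: object) -> None:
        # Insert or refresh an entry, evicting the LRU entry when over capacity.
        self._cache[header] = handle
        self._cache.move_to_end(header)
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)
```

Since most proofs in a batch share a common header (e.g. `import Mathlib`), a small cache already absorbs most of the initialization cost.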

πŸ“ Abstract
We introduce the Kimina Lean Server, an open-source project that enables fast and scalable interaction with Lean 4 via a unified REST API, designed as a simple verifier for reinforcement learning pipelines. Built on top of the Lean FRO's Lean REPL, it combines server-side parallelization, managing multiple Lean REPL processes concurrently, with an LRU caching strategy that reuses Lean imports across requests. Together these reduce initialization overhead and allow large-scale batch processing of Lean code. The client-side interface lets users submit batches of proofs and receive Lean feedback, including tactics and tactic states extracted via infotree processing. The result is a high-performance, scalable workflow for both interaction and extraction of proofs, tactics, and tactic states. We open source our implementation on GitHub.
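To make the batch-submission workflow concrete, the sketch below builds a JSON payload of tagged proof snippets of the kind a client might POST to the server. The endpoint shape, the `codes`/`custom_id` field names, and the helper `build_batch_request` are all assumptions for illustration; the real Kimina client API may differ.

```python
import json

def build_batch_request(proofs):
    """Build a JSON payload for a hypothetical batch-verification endpoint
    (field names are assumed, not the documented Kimina API): one entry per
    proof, each tagged with a custom_id so responses can be matched back
    to their inputs."""
    return json.dumps({
        "codes": [
            {"custom_id": f"proof_{i}", "code": code}
            for i, code in enumerate(proofs)
        ]
    })

payload = build_batch_request([
    "theorem t : 1 + 1 = 2 := by rfl",
    "theorem s : 2 + 2 = 4 := by rfl",
])
```

Tagging each snippet with an identifier is what makes large asynchronous batches practical: results can return in any order and still be joined back to the RL rollout that produced them.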
Problem

Research questions and friction points this paper is trying to address.

Enables fast, scalable Lean 4 interaction via REST API
Reduces initialization overhead with parallel REPL processes
Facilitates large-scale batch processing of Lean code
Innovation

Methods, ideas, or system contributions that make the work stand out.

REST API for scalable Lean 4 interaction
Server-side parallelization with Lean REPL
LRU caching to reduce initialization overhead
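The server-side parallelization idea above can be sketched as fanning a batch of proofs out across a worker pool. The stub verifier here is a placeholder assumption: the real server dispatches each snippet to an actual Lean REPL process rather than checking for `sorry`.

```python
from concurrent.futures import ThreadPoolExecutor

def verify_stub(code: str) -> dict:
    # Stand-in for sending one snippet to a Lean REPL worker; the real
    # server runs Lean processes. Here we merely flag a trivially
    # "complete" proof (no `sorry`) for illustration.
    return {"code": code, "complete": "sorry" not in code}

def verify_batch(codes, max_workers: int = 4):
    # Fan the batch out across a pool of workers; map preserves input order,
    # so results line up with the submitted snippets.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(verify_stub, codes))
```

Because verification is I/O- and subprocess-bound, a pool of long-lived REPL workers keeps all cores busy without paying a process-startup cost per request.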
πŸ”Ž Similar Papers
No similar papers found.
Authors
Marco Dos Santos
Haiming Wang
Hugues de Saxcé
Ran Wang
Mantas Baksys
Mert Unsal
Junqi Liu
Zhengying Liu
Jia Li