TorchLean: Formalizing Neural Networks in Lean

📅 2026-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the semantic gap between the environment that executes a neural network and the environment that verifies it, a gap that can undermine formal guarantees in safety-critical deployments. To bridge it, the authors introduce TorchLean, a framework implemented in Lean 4 that integrates PyTorch-style APIs with formal verification, treating neural networks as first-class mathematical objects with a single, precise semantics shared by execution and verification. Built on an op-tagged SSA/DAG intermediate representation, TorchLean explicitly models IEEE-754 binary32 floating-point semantics and supports both executable kernels and formally verifiable bound propagation, including interval bound propagation (IBP) and CROWN/LiRPA-style methods, alongside certificate checking. The framework enables end-to-end verification of diverse properties, including certified robustness, residual bounds for physics-informed neural networks (PINNs), and Lyapunov stability of neural controllers, and includes a mechanized proof of a universal approximation theorem.
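The interval bound propagation (IBP) mentioned above can be sketched with plain interval arithmetic: a box of possible inputs is pushed through each layer, widening at affine operations and mapping directly through monotone activations. The sketch below is a hypothetical Python illustration of the general IBP technique, not TorchLean code (which is Lean 4); the function names, weights, and the tiny two-layer network are invented for the example.

```python
def ibp_affine(lo, hi, W, b):
    """Propagate elementwise bounds [lo, hi] through y = W @ x + b.

    For each output, the lower bound pairs positive weights with input
    lower bounds and negative weights with input upper bounds; the upper
    bound does the opposite.
    """
    new_lo, new_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        new_lo.append(l)
        new_hi.append(h)
    return new_lo, new_hi

def ibp_relu(lo, hi):
    """ReLU is monotone, so bounds map through it directly."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Input box: x0 in [0, 1], x1 in [-1, 1]
lo, hi = [0.0, -1.0], [1.0, 1.0]
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1]
lo, hi = ibp_affine(lo, hi, W1, b1)  # pre-activation bounds
lo, hi = ibp_relu(lo, hi)            # post-ReLU bounds: [0, 2] x [0, 1.1]
```

CROWN/LiRPA-style methods tighten these boxes by tracking linear relaxations of each activation instead of plain intervals, but the certificate-checking idea is the same: sound bounds on the output imply the property for every input in the box.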

📝 Abstract
Neural networks are increasingly deployed in safety- and mission-critical pipelines, yet many verification and analysis results are produced outside the programming environment that defines and runs the model. This separation creates a semantic gap between the executed network and the analyzed artifact, so guarantees can hinge on implicit conventions such as operator semantics, tensor layouts, preprocessing, and floating-point corner cases. We introduce TorchLean, a framework in the Lean 4 theorem prover that treats learned models as first-class mathematical objects with a single, precise semantics shared by execution and verification. TorchLean unifies (1) a PyTorch-style verified API with eager and compiled modes that lower to a shared op-tagged SSA/DAG computation-graph IR, (2) explicit Float32 semantics via an executable IEEE-754 binary32 kernel and proof-relevant rounding models, and (3) verification via IBP and CROWN/LiRPA-style bound propagation with certificate checking. We validate TorchLean end-to-end on certified robustness, physics-informed residual bounds for PINNs, and Lyapunov-style neural controller verification, alongside mechanized theoretical results including a universal approximation theorem. These results demonstrate a semantics-first infrastructure for fully formal, end-to-end verification of learning-enabled systems.
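The abstract's insistence on explicit IEEE-754 binary32 semantics matters because binary32 and binary64 round differently, so a verifier reasoning in one precision while the network executes in the other can certify the wrong artifact. A small Python sketch (hypothetical, not part of TorchLean) that simulates binary32 rounding with the standard `struct` module shows one such corner case:

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python float (binary64) to the nearest IEEE-754 binary32 value
    by packing to and unpacking from a 4-byte little-endian float."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# In binary64, the classic rounding artifact appears:
print(0.1 + 0.2 == 0.3)              # False

# In binary32, the same sum happens to round to exactly float32(0.3):
a, b = to_f32(0.1), to_f32(0.2)
print(to_f32(a + b) == to_f32(0.3))  # True
```

A guarantee proved against one of these semantics simply does not transfer to the other, which is the kind of implicit convention the shared-semantics design is meant to eliminate.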
Problem

Research questions and friction points this paper is trying to address.

neural network verification
semantic gap
formal semantics
safety-critical systems
floating-point semantics
Innovation

Methods, ideas, or system contributions that make the work stand out.

formal verification
neural networks
Lean 4
floating-point semantics
certified robustness