Typed Chain-of-Thought: A Curry-Howard Framework for Verifying LLM Reasoning

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Chain-of-Thought (CoT) reasoning lacks formal fidelity guarantees, undermining the interpretability and trustworthiness of large language models. To address this, we introduce, for the first time, the Curry-Howard correspondence into LLM reasoning verification, proposing a novel "reasoning-as-proof" paradigm: natural-language CoT chains are systematically mapped to typed logical proofs, where each reasoning step corresponds to a formal type derivation. Our method integrates type theory, formal logic, and NLP techniques into an end-to-end framework for translating informal reasoning into formal, machine-checkable representations. Experiments demonstrate that our approach automatically converts CoT outputs into verifiable typed programs, yielding strong formal certificates. Compared with heuristic explanations, it significantly enhances the verifiability, reliability, and interpretability of model reasoning. This work establishes a rigorous theoretical foundation and a practical implementation pathway for trustworthy AI.
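The core idea can be illustrated with a minimal sketch: under Curry-Howard, a reasoning step "A, therefore B" corresponds to a term of function type A → B, and a CoT chain is faithful only if its steps compose into a well-typed program. The code below is a hypothetical toy model, not the paper's implementation; `Step`, `check_chain`, and the example propositions are illustrative names.

```python
from dataclasses import dataclass

@dataclass
class Step:
    text: str        # the informal natural-language reasoning step
    premise: str     # the proposition (type) the step consumes
    conclusion: str  # the proposition (type) the step produces

def check_chain(steps, goal):
    """Return True iff the steps compose into a proof of `goal`.

    Mirrors type-checking function composition: each step's premise
    must match the previous step's conclusion, and the final
    conclusion must be the goal proposition.
    """
    for prev, nxt in zip(steps, steps[1:]):
        if prev.conclusion != nxt.premise:
            return False  # ill-typed: a step relies on an unproven premise
    return steps[-1].conclusion == goal

trace = [
    Step("It is raining, so the ground gets wet.", "Raining", "GroundWet"),
    Step("A wet ground is slippery.", "GroundWet", "Slippery"),
]
print(check_chain(trace, "Slippery"))  # True: the trace is well-typed
```

A chain whose intermediate types fail to line up is rejected, which is the sense in which a successful conversion acts as a verifiable certificate rather than a plausibility judgment.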

📝 Abstract
While Chain-of-Thought (CoT) prompting enhances the reasoning capabilities of large language models, the faithfulness of the generated rationales remains an open problem for model interpretability. We propose a novel theoretical lens for this problem grounded in the Curry-Howard correspondence, which posits a direct relationship between formal proofs and computer programs. Under this paradigm, a faithful reasoning trace is analogous to a well-typed program, where each intermediate step corresponds to a typed logical inference. We operationalise this analogy, presenting methods to extract and map the informal, natural language steps of CoT into a formal, typed proof structure. Successfully converting a CoT trace into a well-typed proof serves as a strong, verifiable certificate of its computational faithfulness, moving beyond heuristic interpretability towards formal verification. Our framework provides a methodology to transform plausible narrative explanations into formally verifiable programs, offering a path towards building more reliable and trustworthy AI systems.
Problem

Research questions and friction points this paper is trying to address.

Verifying faithfulness of Chain-of-Thought reasoning traces
Mapping natural language reasoning to formal proof structures
Providing formal verification for LLM interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mapping CoT reasoning to typed proof structures
Using Curry-Howard correspondence for formal verification
Converting natural language steps into verifiable programs