Neural Interactive Proofs

📅 2024-12-12
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses how a trusted but computationally bounded verifier can learn to reliably solve tasks by interacting with one or more powerful, untrusted provers. Method: the authors represent all agents as neural networks and call solutions to this problem "neural interactive proofs"; they introduce a unifying framework based on prover-verifier games that generalises previously proposed interaction protocols. Contribution/Results: they describe several new protocols for generating neural interactive proofs and provide a theoretical comparison of both new and existing approaches, then support the theory with experiments in two domains, a toy graph isomorphism problem and a code validation task using large language models. The work aims to lay a foundation for future research on neural interactive proofs and their application to building safer AI systems.

📝 Abstract
We consider the problem of how a trusted, but computationally bounded agent (a 'verifier') can learn to interact with one or more powerful but untrusted agents ('provers') in order to solve a given task. More specifically, we study the case in which agents are represented using neural networks and refer to solutions of this problem as neural interactive proofs. First we introduce a unifying framework based on prover-verifier games, which generalises previously proposed interaction protocols. We then describe several new protocols for generating neural interactive proofs, and provide a theoretical comparison of both new and existing approaches. Finally, we support this theory with experiments in two domains: a toy graph isomorphism problem that illustrates the key ideas, and a code validation task using large language models. In so doing, we aim to create a foundation for future work on neural interactive proofs and their application in building safer AI systems.
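The toy graph isomorphism domain mentioned in the abstract has a classic non-neural counterpart that illustrates the prover-verifier idea. Below is a minimal sketch of the well-known interactive proof of graph non-isomorphism: the verifier secretly permutes one of two graphs and asks the prover which one it came from. A prover can answer correctly every round only if the graphs really are non-isomorphic. This is an illustrative baseline, not the paper's neural protocol; the function names and graph encoding are this sketch's own.

```python
import itertools
import random

def relabel(edges, perm):
    """Apply a node relabelling to an edge set (edges are frozensets of node ids)."""
    return {frozenset(perm[v] for v in e) for e in edges}

def is_isomorphic(g, h, n):
    """Brute-force isomorphism test, feasible only for tiny graphs."""
    return any(relabel(g, perm) == h for perm in itertools.permutations(range(n)))

def honest_prover(challenge, g0, g1, n):
    """Unbounded prover: name the source graph the challenge was sampled from."""
    return 0 if is_isomorphic(challenge, g0, n) else 1

def verify_non_isomorphism(g0, g1, n, prover, rounds=16, seed=0):
    """Bounded verifier: permute a secretly chosen graph and ask the prover
    which one it came from. Consistently correct answers are only possible
    (beyond chance) if g0 and g1 are NOT isomorphic."""
    rng = random.Random(seed)
    for _ in range(rounds):
        b = rng.randrange(2)
        perm = list(range(n))
        rng.shuffle(perm)
        challenge = relabel((g0, g1)[b], perm)
        if prover(challenge, g0, g1, n) != b:
            return False  # prover caught guessing
    return True

def edges(*pairs):
    return {frozenset(p) for p in pairs}

# C4 (all degrees 2) vs the "paw" graph (degrees 3, 2, 2, 1): not isomorphic.
c4 = edges((0, 1), (1, 2), (2, 3), (3, 0))
paw = edges((0, 1), (0, 2), (0, 3), (1, 2))
print(verify_non_isomorphism(c4, paw, 4, honest_prover))  # True: proof accepted
```

When the two graphs are isomorphic, the permuted challenge is consistent with either source, so even an unbounded prover succeeds only half the time per round and is rejected with high probability as rounds accumulate.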
Problem

Research questions and friction points this paper is trying to address.

How can a trusted but computationally bounded verifier learn to interact with untrusted, more powerful provers?
How should the agents in an interactive proof be represented as neural networks?
Which interaction protocols yield neural interactive proofs that support safer AI systems?
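The core tension above, a verifier that cannot afford to redo the prover's work, can be made concrete with a toy spot-checking game (an illustrative sketch of the general idea, not a protocol from the paper): the prover claims to have sorted a list, and a verifier with a comparison budget checks only a few random adjacent pairs.

```python
import random

def spot_check_sorted(claimed, budget, seed=0):
    """Bounded verifier: check only `budget` random adjacent pairs of the
    prover's claimed sort, rather than all n - 1 of them."""
    rng = random.Random(seed)
    positions = rng.sample(range(len(claimed) - 1), budget)
    return all(claimed[i] <= claimed[i + 1] for i in positions)

honest = list(range(100))  # a correctly sorted claim
dishonest = list(range(100))
dishonest[40], dishonest[41] = dishonest[41], dishonest[40]  # one hidden inversion

print(spot_check_sorted(honest, budget=10))     # True: an honest prover always passes
print(spot_check_sorted(dishonest, budget=99))  # False: a full budget catches the flaw
```

An honest prover passes any spot check, while a dishonest one slips past a small budget with probability roughly 1 - k/(n - 1); trading budget against soundness in this way is the kind of verifier-side constraint the paper's framework formalises for learned agents.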
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifying framework for prover-verifier games
New protocols for neural interactive proofs
Experiments in graph isomorphism and code validation