Tractable Asymmetric Verification for Large Language Models via Deterministic Replicability

📅 2025-09-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Verifying the authenticity of large language model (LLM) outputs in multi-agent systems remains computationally expensive and impractical for real-time deployment. Method: This paper proposes an asymmetric verification framework that exploits the deterministic reproducibility of autoregressive models in computation-homogeneous environments. It introduces a lightweight, probabilistic, segmented verification mechanism integrating deterministic re-generation, randomized segment sampling, distributed collaborative auditing, and tunable precision control, ensuring that verification cost is substantially lower than generation cost. Contribution/Results: Experiments demonstrate verification throughput over 12× faster than full re-generation, with a configurable trade-off between detection probability and efficiency. Crucially, this work is the first to turn deterministic reproducibility into a scalable, auditable trust infrastructure, establishing a novel paradigm for trustworthy LLM-based multi-agent collaboration.

📝 Abstract
The landscape of Large Language Models (LLMs) is shifting rapidly towards dynamic, multi-agent systems. This introduces a fundamental challenge in establishing computational trust: how can one agent verify that another's output was genuinely produced by the claimed LLM, rather than falsified or generated by a cheaper, inferior model? To address this challenge, this paper proposes a verification framework that achieves tractable asymmetric effort, where the cost to verify a computation is substantially lower than the cost to perform it. Our approach is built on the principle of deterministic replicability, a property inherent to autoregressive models that strictly requires a computationally homogeneous environment in which all agents operate on identical hardware and software stacks. Within this defined context, our framework enables multiple validators to probabilistically audit small, random segments of an LLM's output while distributing the verification workload across them. Simulations demonstrate that targeted verification can be over 12 times faster than full regeneration, with tunable parameters to adjust the detection probability. By establishing a tractable mechanism for auditable LLM systems, our work offers a foundational layer for responsible AI and a cornerstone for future research into more complex, heterogeneous multi-agent systems.
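The abstract does not include pseudocode, but the audit it describes can be sketched as follows: a validator picks a few random segments of the claimed token sequence, deterministically re-generates each segment from its preceding prefix, and compares token-by-token. The `regenerate(prefix, length)` interface is a hypothetical stand-in for bit-exact greedy decoding on the claimed model, which the paper's homogeneity assumption (identical hardware and software stacks) makes possible; names and parameters here are illustrative, not the authors' implementation.

```python
import random

def verify_segments(claimed_tokens, regenerate, num_segments=4, seg_len=8, seed=0):
    """Probabilistically audit a claimed LLM output.

    Re-generates `num_segments` random segments of length `seg_len`
    under deterministic decoding and compares them to the claimed
    tokens. Verification cost is num_segments * seg_len regenerated
    tokens, far below re-generating all len(claimed_tokens) tokens.
    """
    rng = random.Random(seed)  # seed shared among validators for auditability
    n = len(claimed_tokens)
    for _ in range(num_segments):
        start = rng.randrange(0, max(1, n - seg_len))
        expected = claimed_tokens[start:start + seg_len]
        # Deterministically reproduce what the claimed model would emit
        # after the prefix; bit-exact under the homogeneity assumption.
        actual = regenerate(claimed_tokens[:start], len(expected))
        if actual != expected:
            return False  # mismatch: output not produced by the claimed model
    return True
```

With, say, 4 segments of 8 tokens checked against a 1,000-token output, the validator regenerates only 32 tokens, which is the source of the asymmetric verify-versus-generate cost.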
Problem

Research questions and friction points this paper is trying to address.

Verifying LLM output authenticity in multi-agent systems
Achieving tractable asymmetric verification with lower cost
Enabling probabilistic audit via deterministic replicability principle
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deterministic replicability for homogeneous environments
Probabilistic auditing of random output segments
Tunable verification faster than full regeneration
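The tunability noted above can be made concrete with a standard sampling argument (illustrative, not taken from the paper): if a fraction f of audited positions would reveal falsification and a validator performs k independent checks, the chance of catching at least one mismatch is 1 - (1 - f)^k, so detection probability can be traded against verification cost by choosing k.

```python
def detection_probability(falsified_fraction, num_checks):
    """Probability that at least one of `num_checks` independent audits
    lands on a falsified position, assuming each check hits one with
    probability `falsified_fraction` (a simplifying independence
    assumption, not the paper's exact analysis)."""
    return 1.0 - (1.0 - falsified_fraction) ** num_checks
```

For example, if 10% of an output is falsified, about 22 checks already push detection probability above 90%, while still re-generating only a small slice of the full sequence.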
🔎 Similar Papers
2024-07-01 · Conference on Empirical Methods in Natural Language Processing · Citations: 2
2024-08-01 · arXiv.org · Citations: 20
2023-10-27 · IACR Cryptology ePrint Archive · Citations: 38