Preventing the Collapse of Peer Review Requires Verification-First AI

📅 2026-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current AI-assisted peer review systems tend to optimize proxy metrics, misaligning incentives and drifting away from scientific truth. This work proposes a “verification-first” design paradigm that takes “truth coupling” as its core objective, positioning AI not as a predictor of review scores but as an adversarial auditing tool that generates verifiable evidence and expands effective verification capacity. By integrating occasional high-fidelity sampling-based verification with frequent proxy judgments in a hybrid modeling framework, and applying game-theoretic and information-theoretic analysis, the study uncovers a phase-transition mechanism driven by verification pressure and signal degradation, and derives the theoretical conditions under which incentive collapse occurs. The findings show that AI systems limited to score prediction incentivize rational agents to optimize proxies, whereas a verification-first architecture can delay or even prevent systemic collapse of peer review.
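The incentive-collapse mechanism can be illustrated with a toy game-theoretic sketch (our own minimal construction for intuition, not the paper's formalism; all payoff parameters are assumptions): an author splits unit effort between genuine improvement and proxy optimization, genuine effort pays off only when a high-fidelity audit actually happens, and the optimal allocation flips discontinuously once the audit probability drops below a threshold set by the relative payoffs.

```python
# Toy model of incentive collapse (illustrative assumptions, not the paper's model).
# An author splits unit effort e in [0, 1] between truth-seeking (share e) and
# proxy optimization (share 1 - e). With probability p_audit the venue runs a
# high-fidelity verification that rewards truth effort; otherwise only the
# proxy score counts. Payoffs are assumed linear in effort.

def optimal_truth_effort(p_audit, r_truth=1.0, r_proxy=1.0):
    """Payoff-maximizing share of effort spent on real improvement."""
    # Expected payoff: p*r_truth*e + (1-p)*r_proxy*(1-e). It is linear in e,
    # so the optimum sits at a corner: all-truth iff p*r_truth > (1-p)*r_proxy.
    return 1.0 if p_audit * r_truth > (1 - p_audit) * r_proxy else 0.0

def collapse_threshold(r_truth=1.0, r_proxy=1.0):
    """Audit probability below which rational effort flips to pure proxy play."""
    return r_proxy / (r_truth + r_proxy)

if __name__ == "__main__":
    print("threshold:", collapse_threshold())
    for p in (0.3, 0.5, 0.7):
        print(f"p_audit={p:.1f} -> truth effort {optimal_truth_effort(p):.0f}")
```

The jump from full truth-seeking to pure proxy optimization at the threshold is the phase-transition flavor of the argument: effort collapses abruptly, not gradually, as verification pressure rises.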

📝 Abstract
This paper argues that AI-assisted peer review should be verification-first rather than review-mimicking. We propose truth-coupling, i.e., how tightly venue scores track latent scientific truth, as the right objective for review tools. We formalize two forces that drive a phase transition toward proxy-sovereign evaluation: verification pressure, when claims outpace verification capacity, and signal shrinkage, when real improvements become hard to separate from noise. In a minimal model that mixes occasional high-fidelity checks with frequent proxy judgment, we derive an explicit coupling law and an incentive-collapse condition under which rational effort shifts from truth-seeking to proxy optimization, even when current decisions still appear reliable. These results motivate actions for tool builders and program chairs: deploy AI as an adversarial auditor that generates auditable verification artifacts and expands effective verification bandwidth, rather than as a score predictor that amplifies claim inflation.
Problem

Research questions and friction points this paper is trying to address.

peer review
AI verification
truth-coupling
proxy-sovereign evaluation
incentive collapse
Innovation

Methods, ideas, or system contributions that make the work stand out.

verification-first AI
truth-coupling
peer review collapse
adversarial auditing
proxy-sovereign evaluation