RankAlign: A Ranking View of the Generator-Validator Gap in Large Language Models

📅 2025-04-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the generator-validator gap—the inconsistency between a model's generated answers and its own verification of those answers—in large language models (LLMs). The authors propose RankAlign, a ranking-based alignment training method that formalizes this gap as score correlation over the full set of candidate answers and optimizes for ranking consistency to enable cross-task generalization. RankAlign employs a dual-head collaborative fine-tuning architecture that jointly models generation and verification, and introduces a ranking loss for end-to-end optimization. Experiments demonstrate that RankAlign closes the generator-validator gap by 31.8% on average, significantly outperforming diverse baselines, and that it remains robust across tasks—including question answering and word sense disambiguation—as well as in out-of-domain settings.

📝 Abstract
Although large language models (LLMs) have become generally more capable and accurate across many tasks, some fundamental sources of unreliability remain in their behavior. One key limitation is their inconsistency at reporting the same information when prompts are changed. In this paper, we consider the discrepancy between a model's generated answer and its own verification of that answer, the generator-validator gap. We define this gap in a more stringent way than prior work: we expect correlation of scores from a generator and a validator over the entire set of candidate answers. We show that according to this measure, a large gap exists in various settings, including question answering, lexical semantics tasks, and next-word prediction. We then propose RankAlign, a ranking-based training method, and show that it significantly closes the gap by 31.8% on average, surpassing all baseline methods. Moreover, this approach generalizes well to out-of-domain tasks and lexical items.
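The abstract defines the gap via correlation of generator and validator scores over the full candidate set. A minimal sketch of that measurement, assuming generator scores are log-probabilities of producing each candidate and validator scores are log-odds of accepting it (all values below are illustrative, not from the paper):

```python
def ranks(xs):
    """Rank values ascending, 1-based (assumes no ties in this toy example)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(a, b):
    """Spearman rank correlation for tie-free score lists."""
    n = len(a)
    ra, rb = ranks(a), ranks(b)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical scores over four candidate answers to one question:
generator_scores = [-1.2, -3.5, -0.4, -2.8]  # log P(answer | question)
validator_scores = [-0.9, -4.1, -1.5, -0.2]  # validator's log-odds of "yes"

print(round(spearman(generator_scores, validator_scores), 2))  # -> 0.2
```

A correlation near 1.0 means generator and validator rank candidates consistently (no gap); a low or negative value, as here, indicates a sizable gap.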
Problem

Research questions and friction points this paper is trying to address.

LLMs report the same information inconsistently when prompts change
A generator-validator gap exists across NLP tasks, including question answering and lexical semantics
How can the generator-validator gap be reduced in a way that generalizes across tasks?
Innovation

Methods, ideas, or system contributions that make the work stand out.

RankAlign, a ranking-based training method
A stricter gap measure: score correlation over the full candidate answer set
Closes the generator-validator gap by 31.8% on average
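The innovations above can be sketched as a pairwise ranking objective. This is a hedged illustration, not the paper's exact loss: whenever the generator scores candidate i above candidate j, the validator is penalized (logistically) for scoring them in the opposite order.

```python
import math

def pairwise_ranking_loss(gen_scores, val_scores):
    """Average logistic pairwise loss: small when the validator's score
    ordering agrees with the generator's, large when it disagrees."""
    loss, pairs = 0.0, 0
    for i in range(len(gen_scores)):
        for j in range(len(gen_scores)):
            if gen_scores[i] > gen_scores[j]:  # generator prefers i over j
                # log(1 + exp(-(v_i - v_j))) -> near 0 when v_i >> v_j
                loss += math.log1p(math.exp(-(val_scores[i] - val_scores[j])))
                pairs += 1
    return loss / max(pairs, 1)

gen = [2.0, 1.0, 0.0]                              # toy generator scores
print(pairwise_ranking_loss(gen, [3.0, 1.0, -1.0]))  # validator agrees: low
print(pairwise_ranking_loss(gen, [-1.0, 1.0, 3.0]))  # validator disagrees: high
```

Minimizing such a loss end-to-end pushes validator scores toward the generator's ranking over the whole candidate set, which is what the correlation-based gap measure rewards.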