🤖 AI Summary
This paper addresses the generator-validator gap in large language models (LLMs): the inconsistency between a model's generated answers and its own verification of those answers. The authors propose RankAlign, a ranking-based alignment training method that formalizes the gap rigorously as score correlation over the full set of candidate answers and optimizes for ranking consistency to enable cross-task generalization. RankAlign jointly models generation and verification through a dual-head collaborative fine-tuning architecture and introduces a ranking loss for end-to-end optimization. Experiments show that RankAlign closes the generator-validator gap by 31.8% on average, significantly outperforming a diverse set of baselines, and that it remains robust and generalizes across tasks, including question answering and word sense disambiguation, as well as in out-of-domain settings.
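The summary above mentions a ranking loss that pushes the generator toward the validator's ordering of candidate answers. As an illustration only (this is a generic pairwise hinge loss, not necessarily the paper's exact objective, and all names and scores below are invented), such a loss can be sketched as:

```python
def pairwise_ranking_loss(gen_scores, val_scores, margin=1.0):
    """Hinge-style pairwise ranking loss (illustrative, not the paper's code).

    For every pair of candidate answers where the validator scores answer i
    above answer j, penalize the generator unless it also scores i above j
    by at least `margin`.
    """
    loss, pairs = 0.0, 0
    for i in range(len(gen_scores)):
        for j in range(len(gen_scores)):
            if val_scores[i] > val_scores[j]:
                loss += max(0.0, margin - (gen_scores[i] - gen_scores[j]))
                pairs += 1
    return loss / max(pairs, 1)

# Two toy candidates; scores stand in for model log-probabilities.
consistent = pairwise_ranking_loss([-0.5, -3.0], [-0.1, -2.0])  # same order -> 0.0
gap = pairwise_ranking_loss([-0.5, -3.0], [-2.0, -0.1])         # reversed order -> positive
print(consistent, gap)
```

When generator and validator already rank the candidates the same way (with enough margin), the loss is zero; a reversed ordering yields a positive penalty, so minimizing it aligns the two rankings.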
📝 Abstract
Although large language models (LLMs) have become generally more capable and accurate across many tasks, some fundamental sources of unreliability remain in their behavior. One key limitation is their inconsistency at reporting the same information when prompts are changed. In this paper, we consider the discrepancy between a model's generated answer and its own verification of that answer, the generator-validator gap. We define this gap in a more stringent way than prior work: we expect the scores of a generator and a validator to correlate over the entire set of candidate answers. We show that according to this measure, a large gap exists in various settings, including question answering, lexical semantics tasks, and next-word prediction. We then propose RankAlign, a ranking-based training method, and show that it significantly closes the gap by 31.8% on average, surpassing all baseline methods. Moreover, this approach generalizes well to out-of-domain tasks and lexical items.
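The abstract's gap measure, correlation between generator and validator scores over the whole candidate set, can be illustrated with a rank correlation. The sketch below is hypothetical: the candidate answers and log-probability scores are invented placeholders, not output from any model in the paper.

```python
def ranks(xs):
    # Rank values (1 = smallest); ties broken by position for simplicity.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    # Spearman correlation = Pearson correlation of the rank vectors.
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented candidate answers to one prompt, e.g. "Name a fruit."
candidates = ["apple", "banana", "carrot", "durian"]
generator_scores = [-0.5, -1.2, -6.0, -3.1]  # log P(answer | prompt), made up
validator_scores = [-0.2, -0.9, -1.1, -4.0]  # log P("Yes" | verification prompt), made up

rho = spearman(generator_scores, validator_scores)
print(f"rank correlation: {rho:.2f}")  # prints 0.80; 1.00 would mean no gap
```

A correlation of 1.0 over the candidate set would mean generator and validator agree on the full ordering of answers, i.e. no gap under this stringent definition; lower values quantify the gap.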