Answer Matching Outperforms Multiple Choice for Language Model Evaluation

📅 2025-07-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multiple-choice benchmarks (e.g., MMLU, GPQA) are widely adopted for language model evaluation because their scoring is objective and easy to automate, yet they suffer from severe shortcut exploitation: models can often answer correctly without even seeing the question, so scores agree poorly with human judgments. This paper traces the problem to an inherent limitation of discriminative evaluation and proposes a generative answer-matching paradigm: a language model, given the reference answer, judges whether a freely generated response is semantically equivalent to it. The method is the first to reach high agreement with human scorers (Cohen’s κ > 0.85) on MMLU-Pro and GPQA-Diamond, and it substantially revises existing model rankings. Key contributions: (1) exposing systematic vulnerabilities in multiple-choice evaluation; (2) establishing a scalable, high-fidelity, near-human generative evaluation framework; and (3) providing a more reliable basis for language model assessment.
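The Cohen's κ figure cited above is chance-corrected agreement between two graders over the same items. A minimal computation for binary (correct/incorrect) grades; the grade lists are made-up illustration, not data from the paper:

```python
def cohens_kappa(a, b):
    """Cohen's kappa between two raters grading the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected by chance from each rater's
    label frequencies.
    """
    assert len(a) == len(b) and a
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n                      # observed
    labels = set(a) | set(b)
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)   # chance
    return (p_o - p_e) / (1 - p_e)

# Hypothetical grades: 1 = marked correct, 0 = marked incorrect.
human  = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
grader = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
print(round(cohens_kappa(human, grader), 3))  # → 0.8
```

A κ above 0.85, as reported for answer matching, means the automated grader disagrees with humans barely more than humans disagree with each other.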

📝 Abstract
Multiple choice benchmarks have long been the workhorse of language model evaluation because grading multiple choice is objective and easy to automate. However, we show multiple choice questions from popular benchmarks can often be answered without even seeing the question. These shortcuts arise from a fundamental limitation of discriminative evaluation not shared by evaluations of the model's free-form, generative answers. Until recently, there appeared to be no viable, scalable alternative to multiple choice--but, we show that this has changed. We consider generative evaluation via what we call answer matching: Give the candidate model the question without the options, have it generate a free-form response, then use a modern language model with the reference answer to determine if the response matches the reference. To compare the validity of different evaluation strategies, we annotate MMLU-Pro and GPQA-Diamond to obtain human grading data, and measure the agreement of each evaluation approach. We find answer matching using recent models--even small ones--achieves near-perfect agreement, in the range of inter-annotator agreement. In contrast, both multiple choice evaluation and using LLM-as-a-judge without reference answers align poorly with human grading. Improving evaluations via answer matching is not merely a conceptual concern: the rankings of several models change significantly when evaluating their free-form responses with answer matching. In light of these findings, we discuss how to move the evaluation ecosystem from multiple choice to answer matching.
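The answer-matching protocol described in the abstract can be sketched as follows. The `toy_judge` below is a hypothetical string-matching stand-in for the LLM judge call, and the prompt wording is illustrative, not the paper's prompt:

```python
def build_judge_prompt(question: str, reference: str, response: str) -> str:
    """Assemble the matching prompt a judge model would receive."""
    return (
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Candidate response: {response}\n"
        "Does the candidate response convey the same answer as the reference? "
        "Reply 'yes' or 'no'."
    )

def toy_judge(prompt: str) -> str:
    """Stand-in for an LLM call: checks the normalized reference
    appears in the normalized response."""
    fields = dict(line.split(": ", 1) for line in prompt.splitlines() if ": " in line)
    ref = fields["Reference answer"].strip().lower()
    resp = fields["Candidate response"].strip().lower()
    return "yes" if ref in resp else "no"

def answer_matches(question, reference, response, judge=toy_judge) -> bool:
    """Answer matching: the candidate answers free-form (no options shown);
    a judge with the reference answer decides if the response matches."""
    return judge(build_judge_prompt(question, reference, response)) == "yes"

print(answer_matches("What is the capital of France?", "Paris",
                     "The capital of France is Paris."))  # True
```

In practice `judge` would wrap a call to a language model; the key design point from the abstract is that the judge sees the reference answer, unlike reference-free LLM-as-a-judge setups.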
Problem

Research questions and friction points this paper is trying to address.

Multiple choice questions in popular benchmarks can often be answered without seeing the question, so scores overstate model ability.
This shortcut stems from discriminative evaluation itself, not from any single benchmark.
Until recently, there was no viable, scalable way to grade free-form, generative answers objectively.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative evaluation via answer matching: grade the model's free-form response instead of its option pick
A modern language model, given the reference answer, judges whether the response matches
A concrete path for moving the evaluation ecosystem from multiple choice to answer matching