Debatable Intelligence: Benchmarking LLM Judges via Debate Speech Evaluation

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the multidimensional comprehension abilities of large language models (LLMs) as judges of debate speeches, specifically their assessment of argument strength and relevance, logical coherence, and stylistic appropriateness. Method: We introduce the first benchmark dataset for debate speech evaluation, comprising over 600 human-annotated debate speeches, and characterize the composite cognitive skills that rigorous debate adjudication requires. Employing a multidimensional scoring framework and comparative analysis, we systematically evaluate state-of-the-art LLMs for judgment consistency and bias, as well as for persuasive speech generation. Results: While LLMs can approximate individual human judges' ratings in some respects, their aggregate score distributions diverge substantially from the human consensus. Notably, frontier LLMs may perform at a human level when generating stance-aware, persuasive debate speeches. Our core contributions are (1) establishing the first dedicated benchmark for debate speech evaluation and (2) uncovering systematic differences in judgment behavior between LLMs and human adjudicators, moving beyond surface-level accuracy matching.
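
The summary's key distinction, that an LLM judge can track individual human ratings while its aggregate score distribution drifts away from the human consensus, can be measured at two levels. The following sketch illustrates the idea on synthetic scores (hypothetical data, not the paper's code or results), using per-item Pearson correlation for individual agreement and a Wasserstein distance for distribution-level divergence:

```python
# Illustrative sketch only: contrasts per-item agreement with aggregate
# distribution divergence, the two levels at which the paper compares
# LLM and human adjudication. All scores here are synthetic.
import numpy as np
from scipy.stats import pearsonr, wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical 1-5 quality ratings for the same 600 speeches.
human_scores = rng.integers(1, 6, size=600).astype(float)

# An LLM judge that tracks individual human ratings but compresses the
# scale, so per-item agreement stays high while the distribution shifts.
llm_scores = np.clip(0.5 * human_scores + 2.0 + rng.normal(0, 0.3, 600), 1, 5)

# Per-item agreement: does the LLM rank speeches like the human judge?
r, _ = pearsonr(human_scores, llm_scores)

# Aggregate behavior: does the LLM's score distribution match the human one?
w = wasserstein_distance(human_scores, llm_scores)

print(f"per-item correlation:  {r:.2f}")  # high: tracks individual ratings
print(f"distribution distance: {w:.2f}")  # large: diverges in aggregate
```

A judge can score well on the first statistic and still fail the second, which is exactly the gap between LLMs and human adjudicators that the summary describes.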

📝 Abstract
We introduce Debate Speech Evaluation as a novel and challenging benchmark for assessing LLM judges. Evaluating debate speeches requires a deep understanding of the speech at multiple levels, including argument strength and relevance, the coherence and organization of the speech, the appropriateness of its style and tone, and so on. This task involves a unique set of cognitive abilities that have previously received limited attention in systematic LLM benchmarking. To explore such skills, we leverage a dataset of over 600 meticulously annotated debate speeches and present the first in-depth analysis of how state-of-the-art LLMs compare to human judges on this task. Our findings reveal a nuanced picture: while larger models can approximate individual human judgments in some respects, they differ substantially in their overall judgment behavior. We also investigate the ability of frontier LLMs to generate persuasive, opinionated speeches, showing that models may perform at a human level on this task.
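
The abstract names three evaluation dimensions: argument strength and relevance, coherence and organization, and style and tone. A minimal sketch of a multidimensional judging rubric of this kind is shown below; the prompt wording, the 1-5 scale, and the unweighted aggregation are illustrative assumptions, not the paper's actual protocol:

```python
# Hypothetical rubric for a multidimensional debate-speech judge.
# The dimensions mirror those named in the abstract; everything else
# (prompt text, scale, aggregation) is an illustrative assumption.
from dataclasses import dataclass

DIMENSIONS = ("argument strength", "coherence and organization", "style and tone")

JUDGE_PROMPT = (
    "You are judging a competitive debate speech. Rate it from 1 (poor) to "
    "5 (excellent) on each of these dimensions: {dims}. "
    "Reply with one integer per dimension."
)

@dataclass
class SpeechScore:
    """Per-dimension ratings returned by a judge (human or LLM)."""
    argument_strength: int
    coherence: int
    style: int

    def overall(self) -> float:
        # Unweighted mean; a real aggregation scheme may weight dimensions.
        return (self.argument_strength + self.coherence + self.style) / 3

prompt = JUDGE_PROMPT.format(dims=", ".join(DIMENSIONS))
score = SpeechScore(argument_strength=4, coherence=3, style=5)
print(prompt)
print(f"overall score: {score.overall():.2f}")
```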
Problem

Research questions and friction points this paper is trying to address.

Assessing LLM judges via debate speech evaluation
Evaluating argument strength, coherence, and style in speeches
Comparing LLM and human judgment behaviors in debate analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Debate Speech Evaluation benchmark
Uses a dataset of over 600 annotated debate speeches
Compares LLM and human judgment behavior