Optimizing for Persuasion Improves LLM Generalization: Evidence from Quality-Diversity Evolution of Debate Strategies

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) optimized for truthful answers often overfit, producing brittle reasoning that fails to generalize at inference time. Method: DebateQD isolates the contribution of persuasive competition to generalization by fixing the debate protocol and swapping only the optimization objective (persuasion versus collaborative truth-seeking) within a single model. Built on a Quality-Diversity evolutionary framework, it combines tournament-based adversarial selection, prompt-encoded strategies, and a three-role setup (proponent, opponent, judge) to evolve diverse debate styles. Results: On the QuALITY benchmark across 7B-72B models, persuasion-optimized strategies narrow the train-test generalization gap by up to 13.94% while matching or exceeding the test performance of truth-based optimization, suggesting that competitive pressure to persuade fosters more transferable reasoning than cooperative truth-seeking.

📝 Abstract
Large Language Models (LLMs) optimized to output truthful answers often overfit, producing brittle reasoning that fails to generalize. While persuasion-based optimization has shown promise in debate settings, it has not been systematically compared against mainstream truth-based approaches. We introduce DebateQD, a minimal Quality-Diversity (QD) evolutionary algorithm that evolves diverse debate strategies across different categories (rationality, authority, emotional appeal, etc.) through tournament-style competitions where two LLMs debate while a third judges. Unlike previously proposed methods that require a population of LLMs, our approach maintains diversity of opponents through prompt-based strategies within a single LLM architecture, making it more accessible for experiments while preserving the key benefits of population-based optimization. In contrast to prior work, we explicitly isolate the role of the optimization objective by fixing the debate protocol and swapping only the fitness function: persuasion rewards strategies that convince the judge irrespective of truth, whereas truth rewards collaborative correctness. Across three model scales (7B, 32B, 72B parameters) and multiple dataset sizes from the QuALITY benchmark, persuasion-optimized strategies achieve up to 13.94% smaller train-test generalization gaps, while matching or exceeding truth optimization's test performance. These results provide the first controlled evidence that competitive pressure to persuade, rather than seek the truth collaboratively, fosters more transferable reasoning skills, offering a promising path for improving LLM generalization.
Problem

Research questions and friction points this paper is trying to address.

Does optimizing LLMs for persuasion, rather than truth, reduce train-test generalization gaps?
How do persuasion-based and truth-based optimization objectives compare under a fixed debate protocol?
How can diverse debate strategies be evolved within a single LLM via evolutionary algorithms?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quality-Diversity evolution for debate strategy optimization
Prompt-based strategies enable diversity within single LLM
Persuasion-focused fitness reduces generalization gaps versus truth
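The evolution loop implied by the points above can be sketched as follows. This is a minimal toy reconstruction from the abstract, not the authors' code: the debate, judge, and mutation calls that would query LLMs are replaced with stubs, and all names (`mutate`, `judge_winner`, `persuasion_fitness`, `evolve`, `CATEGORIES`) are hypothetical.

```python
import random

# Strategy categories named in the abstract; each keeps its own sub-population
# of prompt strings, which is what preserves diversity (the QD part).
CATEGORIES = ["rationality", "authority", "emotional_appeal"]

def mutate(prompt, rng):
    """Stub mutation: a real system would ask an LLM to rewrite the prompt."""
    return prompt + f" (variant {rng.randint(0, 999)})"

def judge_winner(question, prompt_a, prompt_b, rng):
    """Stub judge returning 'a' or 'b'; in the paper a third LLM judges."""
    return rng.choice(["a", "b"])

def persuasion_fitness(prompt, opponents, questions, rng):
    """Persuasion objective: count judge wins irrespective of ground truth.
    Swapping this for a truth-based fitness (rewarding collaborative
    correctness) gives the paper's controlled comparison."""
    return sum(judge_winner(q, prompt, opp, rng) == "a"
               for q in questions for opp in opponents)

def evolve(generations=3, pop_per_cat=2, seed=0):
    rng = random.Random(seed)
    questions = ["q1", "q2"]  # stand-ins for QuALITY passages
    pop = {c: [f"Argue via {c}." for _ in range(pop_per_cat)]
           for c in CATEGORIES}
    for _ in range(generations):
        for cat, prompts in pop.items():
            # Tournament: each strategy debates strategies from other categories.
            opponents = [p for c, ps in pop.items() if c != cat for p in ps]
            scored = sorted(
                prompts,
                key=lambda p: persuasion_fitness(p, opponents, questions, rng),
                reverse=True)
            elite = scored[0]  # keep the category's best strategy
            pop[cat] = [elite] + [mutate(elite, rng)
                                  for _ in range(pop_per_cat - 1)]
    return pop

pop = evolve()
```

Because strategies are plain prompt strings, a single LLM can play proponent, opponent, and judge, which is what makes the approach cheaper than maintaining a population of fine-tuned models.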