Theoretical Aspects of Bias and Diversity in Minimum Bayes Risk Decoding

📅 2024-10-19
🏛️ arXiv.org
📈 Citations: 2 (influential: 0)
🤖 AI Summary
This paper supplies a theoretical foundation for Minimum Bayes Risk (MBR) decoding in text generation. It interprets MBR through a bias-diversity decomposition of the error in estimated hypothesis quality: bias quantifies how closely the utility functions align with human evaluation, while diversity captures the variation in quality estimates across utility functions. The analysis shows that bias and diversity are difficult to improve simultaneously, and that increasing diversity is an effective way to strengthen MBR decoding. Building on this, the authors design a pseudo-bias metric that approximates the bias term using gold references, and propose Metric-augmented MBR (MAMBR), which increases diversity by adjusting the behavior of the utility functions without altering the pseudo-references. Experiments across multiple NLP tasks show that the decomposed terms correlate well with performance and that MAMBR improves text generation quality. The code is publicly available.

📝 Abstract
Text generation commonly relies on greedy and beam decoding that limit the search space and degrade output quality. Minimum Bayes Risk (MBR) decoding can mitigate this problem by utilizing automatic evaluation metrics and model-generated pseudo-references. Previous studies have conducted empirical analyses to reveal the improvement by MBR decoding, and reported various observations. However, despite these observations, the theoretical relationship between them remains uncertain. To address this, we present a novel theoretical interpretation of MBR decoding from the perspective of bias-diversity decomposition. We decompose errors in the estimated quality of generated hypotheses in MBR decoding into two key factors: bias, which reflects the closeness between utility functions and human evaluations, and diversity, which represents the variation in the estimated quality of utility functions. Our theoretical analysis reveals the difficulty in simultaneously improving both bias and diversity, and highlights the effectiveness of increasing diversity to enhance MBR decoding performance. This analysis verifies the alignment between our theoretical insights and the empirical results reported in previous work. Furthermore, to support our theoretical findings, we propose a new metric, pseudo-bias, which approximates the bias term using gold references. We also introduce a new MBR approach, Metric-augmented MBR (MAMBR), which increases diversity by adjusting the behavior of utility functions without altering the pseudo-references. Experimental results across multiple NLP tasks show that the decomposed terms in the bias-diversity decomposition correlate well with performance, and that MAMBR improves text generation quality by modifying utility function behavior. Our code will be available at https://github.com/naist-nlp/mbr-bias-diversity.
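The MBR procedure the abstract describes can be sketched minimally: draw samples from the model, treat them as both candidate hypotheses and pseudo-references, and pick the hypothesis with the highest average utility against the pseudo-references. The token-overlap `utility` below is a hypothetical stand-in for the automatic metrics the paper discusses, not the authors' implementation.

```python
def utility(hypothesis: str, reference: str) -> float:
    """Hypothetical stand-in utility: token-overlap F1 between two strings."""
    h, r = hypothesis.split(), reference.split()
    if not h or not r:
        return 0.0
    overlap = len(set(h) & set(r))
    precision, recall = overlap / len(h), overlap / len(r)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def mbr_decode(hypotheses, pseudo_references):
    """Select the hypothesis with the highest mean utility over pseudo-references."""
    return max(
        hypotheses,
        key=lambda h: sum(utility(h, r) for r in pseudo_references) / len(pseudo_references),
    )

# Model samples serve as both candidate hypotheses and pseudo-references.
samples = [
    "the cat sat on the mat",
    "a cat is on the mat",
    "dogs run in the park",
]
best = mbr_decode(samples, samples)  # the consensus-like sample wins
```

Unlike greedy or beam search, the selection here rewards agreement with the other samples, which is why the quality of the utility function (the bias term) and the spread of its estimates (the diversity term) govern the outcome.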
Problem

Research questions and friction points this paper is trying to address.

Theoretical understanding of MBR decoding's performance improvements
Bias-diversity trade-off in quality estimation of hypotheses
Diversity's role in explaining inference scaling laws
Innovation

Methods, ideas, or system contributions that make the work stand out.

MBR decoding analyzed via bias-diversity decomposition
Diversity improves MBR decoding performance
Diversity explains inference scaling laws
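The decomposition named in the bullets above matches the form of the classic ambiguity decomposition from ensemble learning: the squared error of the averaged quality estimate equals the average bias minus the diversity. The numeric check below reflects our reading of that identity; the paper's exact formulation may differ, and `decompose` is a hypothetical helper.

```python
def decompose(estimates, human_score):
    """Split the squared error of the averaged quality estimate into
    bias (mean squared deviation from the human score) and
    diversity (variance of the estimates around their mean)."""
    n = len(estimates)
    mean = sum(estimates) / n
    bias = sum((u - human_score) ** 2 for u in estimates) / n
    diversity = sum((u - mean) ** 2 for u in estimates) / n
    ensemble_error = (mean - human_score) ** 2
    return ensemble_error, bias, diversity

# Three utility functions scoring one hypothesis; human score is 0.75.
err, bias, div = decompose([0.6, 0.8, 0.7], 0.75)
# Identity: err == bias - div (up to floating point), so for fixed bias,
# higher diversity means lower error of the averaged estimate.
```

This makes the reported trade-off concrete: pushing every utility function closer to human judgments lowers bias, but also tends to make the functions agree, shrinking the diversity term that the paper identifies as the lever for improving MBR performance.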