Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Post-training alignment frequently induces mode collapse in large language models (LLMs), primarily due to typicality bias inherent in preference data. This work is the first to identify, at the data level, that such bias is a key driver of reduced response diversity. To address it, we propose Verbalized Sampling (VS): a training-free, inference-time prompting method that explicitly elicits multiple candidate responses from the model alongside their estimated probabilities. Grounded in cognitive-psychology theories of typicality and validated through empirical analysis, the approach uses a structured prompt template to mitigate the bias at decoding time. Experiments across creative writing, dialogue simulation, open-ended question answering, and synthetic data generation show 1.6–2.1× improvements in response diversity, with no degradation in factual accuracy or safety. Notably, the gains scale with model capability. This work establishes a training-free, prompt-driven paradigm for enhancing generative diversity in LLMs.

📝 Abstract
Post-training alignment often reduces LLM diversity, leading to a phenomenon known as mode collapse. Unlike prior work that attributes this effect to algorithmic limitations, we identify a fundamental, pervasive data-level driver: typicality bias in preference data, whereby annotators systematically favor familiar text, consistent with well-established findings in cognitive psychology. We formalize this bias theoretically, verify it empirically on preference datasets, and show that it plays a central role in mode collapse. Motivated by this analysis, we introduce Verbalized Sampling (VS), a simple, training-free prompting strategy to circumvent mode collapse. VS prompts the model to verbalize a probability distribution over a set of responses (e.g., "Generate 5 jokes about coffee and their corresponding probabilities"). Comprehensive experiments show that VS significantly improves performance across creative writing (poems, stories, jokes), dialogue simulation, open-ended QA, and synthetic data generation, without sacrificing factual accuracy or safety. For instance, in creative writing, VS increases diversity by 1.6–2.1× over direct prompting. We further observe an emergent trend that more capable models benefit more from VS. In sum, our work provides a new data-centric perspective on mode collapse and a practical inference-time remedy that helps unlock pre-trained generative diversity.
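The abstract's example ("Generate 5 jokes about coffee and their corresponding probabilities") can be sketched as a small helper that builds a VS-style prompt and normalizes the verbalized probabilities in the model's reply. This is a minimal illustration, not the paper's exact template; the function names, the JSON output format, and the example reply are all assumptions for demonstration.

```python
import json


def verbalized_sampling_prompt(task: str, k: int = 5) -> str:
    """Build a VS-style prompt asking for k candidates plus probabilities.

    The wording is illustrative; the paper's actual template may differ.
    """
    return (
        f"{task}\n"
        f"Generate {k} responses with their corresponding probabilities, "
        f"sampled from the full distribution. "
        'Return JSON: [{"text": ..., "probability": ...}, ...]'
    )


def parse_vs_output(raw: str) -> list[tuple[str, float]]:
    """Parse the verbalized distribution and renormalize to sum to 1."""
    items = json.loads(raw)
    total = sum(item["probability"] for item in items) or 1.0
    return [(item["text"], item["probability"] / total) for item in items]


# Hypothetical usage: `raw` stands in for a model's reply to the prompt.
prompt = verbalized_sampling_prompt("Tell a joke about coffee.", k=3)
raw = (
    '[{"text": "A", "probability": 0.5},'
    ' {"text": "B", "probability": 0.3},'
    ' {"text": "C", "probability": 0.2}]'
)
candidates = parse_vs_output(raw)
```

One could then sample from `candidates` (e.g., with `random.choices` weighted by the normalized probabilities) instead of always taking the single greedy answer, which is the diversity-restoring step the paper describes.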
Problem

Research questions and friction points this paper is trying to address.

Addresses mode collapse in LLMs caused by typicality bias in preference data
Introduces Verbalized Sampling to enhance diversity without sacrificing accuracy
Improves creative writing, dialogue, and QA via a training-free prompting strategy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Verbalized Sampling prompts the model to verbalize a probability distribution over candidate responses
Training-free, inference-time strategy mitigates mode collapse in aligned LLMs
Improves diversity across creative writing, dialogue, and open-ended QA tasks
👥 Authors
Jiayi Zhang, Northeastern University
Simon Yu, Northeastern University
Derek Chong, Stanford University
Anthony Sicilia, Northeastern University
Michael R. Tomz, Stanford University
Christopher D. Manning, Stanford University
Weiyan Shi, Northeastern University

Topics: Machine Learning, Artificial Intelligence