🤖 AI Summary
Large language models (LLMs) suffer from “mode collapse”: a marked deficit in output diversity and novelty. Method: We introduce NoveltyBench, a benchmark designed to evaluate whether models can produce multiple distinct, high-quality answers to the same prompt. It is built from prompts curated to elicit diverse answers together with filtered real-world user queries, and it evaluates 20 mainstream LLMs under an assessment paradigm that weighs creativity jointly with quality, using quantitative signals such as n-gram overlap and semantic distance computed over repeated samples and across prompting strategies. Contribution/Results: The study surfaces a “scale–diversity paradox”: neither larger model size nor higher scores on standard benchmarks translates into more diverse output distributions. State-of-the-art models are substantially less diverse than human writers; within a model family, larger variants often yield *less* diversity than their smaller counterparts; and in-context regeneration techniques only partially alleviate the underlying lack of distributional diversity.
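The summary mentions n-gram overlap and semantic distance as diversity signals; the sketch below shows one plausible way such measures could be computed over repeated samples for a single prompt. This is a minimal illustration, not NoveltyBench's official scorer: the distinct-n formulation, the `sentence-transformers` dependency, and the `all-MiniLM-L6-v2` embedding model are all assumptions made for the example.

```python
# Illustrative diversity measures over k sampled responses to one prompt:
# (a) distinct-n ratio (lexical diversity), (b) mean pairwise cosine distance
# between sentence embeddings (semantic diversity). Not the paper's scorer.
from itertools import combinations

import numpy as np
from sentence_transformers import SentenceTransformer  # assumed dependency


def distinct_n(responses: list[str], n: int = 3) -> float:
    """Fraction of n-grams across all responses that are unique."""
    ngrams = []
    for text in responses:
        tokens = text.lower().split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / max(len(ngrams), 1)


def mean_pairwise_semantic_distance(responses: list[str]) -> float:
    """Average cosine distance between embeddings of all response pairs."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    emb = model.encode(responses, normalize_embeddings=True)
    dists = [1.0 - float(np.dot(emb[i], emb[j]))
             for i, j in combinations(range(len(responses)), 2)]
    return float(np.mean(dists)) if dists else 0.0


if __name__ == "__main__":
    # Toy samples: two identical responses and one distinct one.
    samples = [
        "Rain drums the tin roof; the night hums a silver chorus.",
        "Rain drums the tin roof; the night hums a silver chorus.",
        "Each drop taps a different note on the corrugated sky.",
    ]
    print(f"distinct-3: {distinct_n(samples):.3f}")
    print(f"mean semantic distance: {mean_pairwise_semantic_distance(samples):.3f}")
```

Duplicated samples drive both numbers down, which is the behavioral signature of mode collapse that metrics of this kind are meant to surface.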
📝 Abstract
Language models have demonstrated remarkable capabilities on standard benchmarks, yet they increasingly suffer from mode collapse, the inability to generate diverse and novel outputs. Our work introduces NoveltyBench, a benchmark specifically designed to evaluate the ability of language models to produce multiple distinct and high-quality outputs. NoveltyBench uses prompts curated to elicit diverse answers together with filtered real-world user queries. Evaluating 20 leading language models, we find that current state-of-the-art systems generate significantly less diversity than human writers. Notably, larger models within a family often exhibit less diversity than their smaller counterparts, challenging the assumption that capability on standard benchmarks translates directly into generative utility. While prompting strategies like in-context regeneration can elicit diversity, our findings highlight a fundamental lack of distributional diversity in current models, which reduces their utility for users seeking varied responses and suggests the need for new training and evaluation paradigms that prioritize creativity alongside quality.
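The abstract cites in-context regeneration as one prompting strategy for eliciting diversity. The sketch below is one plausible implementation under stated assumptions, not the paper's exact protocol: it assumes an OpenAI-style chat-completions client and uses the model name `gpt-4o-mini` purely as a placeholder.

```python
# Illustrative in-context regeneration loop (assumed protocol, not the paper's):
# repeatedly show the model its own earlier answers and ask for a distinct one.
from openai import OpenAI  # assumed client; any chat-completion API works similarly

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def regenerate(prompt: str, k: int = 4, model: str = "gpt-4o-mini") -> list[str]:
    """Collect k answers, conditioning each new request on the previous ones."""
    answers: list[str] = []
    for _ in range(k):
        messages = [{"role": "user", "content": prompt}]
        if answers:
            prior = "\n\n".join(f"- {a}" for a in answers)
            messages.append({
                "role": "user",
                "content": f"You already answered:\n{prior}\n"
                           "Give a new answer that is substantively different.",
            })
        reply = client.chat.completions.create(model=model, messages=messages)
        answers.append(reply.choices[0].message.content)
    return answers


if __name__ == "__main__":
    for i, answer in enumerate(regenerate("Suggest a name for a hiking app."), 1):
        print(f"[{i}] {answer}")
```

Conditioning each request on prior answers can push generations apart at the prompt level, but, as the abstract notes, it does not fix the underlying lack of diversity in the model's output distribution.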