🤖 AI Summary
This work challenges the prevailing view that "monoculture" (excessive output consistency among large language models) is an intrinsic property of the models themselves. Instead, it demonstrates that assessments of monoculture are inherently subjective and context-dependent, shaped by the analyst's choice of independence baselines and evaluation contexts. The authors systematically analyze, for the first time, how different null models, such as those incorporating task difficulty, alter conclusions about monoculture. Empirical evaluations on two large-scale benchmarks reveal that varying the null model can significantly shift inferences about the presence and extent of monoculture, undermining the notion that it constitutes an absolute, model-inherent trait.
📝 Abstract
Machine learning models -- including large language models (LLMs) -- are often said to exhibit monoculture, where outputs agree strikingly often. But what does it actually mean for models to agree too much? We argue that this question is inherently subjective, resting on two key decisions. First, the analyst must specify a baseline null model for what "independence" should look like. This choice is inherently subjective, and as we show, different null models result in dramatically different inferences about excess agreement. Second, we show that inferences depend on the population of models and items under consideration. Models that seem highly correlated in one context may appear independent when evaluated on a different set of questions, or against a different set of peers. Experiments on two large-scale benchmarks validate our theoretical findings. For example, we find drastically different inferences when using a null model that accounts for item difficulty compared to previous works that do not. Together, our results reframe monoculture evaluation not as an absolute property of model behavior, but as a context-dependent inference problem.
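The role of the null model can be made concrete with a small simulation. The sketch below is a hypothetical illustration (not the paper's actual setup): two models answer items independently, but both are more likely to err on harder items. Under a null that only matches each model's overall accuracy, their agreement looks excessive; under a null that conditions on item difficulty, the same agreement is exactly what independence predicts.

```python
import random

random.seed(0)

# Hypothetical setup: per-item difficulty d is the probability of answering
# incorrectly; two models answer each item independently given d.
n_items = 20000
difficulties = [random.random() for _ in range(n_items)]

def answers(diffs):
    # Each model is correct on an item with probability 1 - difficulty.
    return [random.random() > d for d in diffs]

a = answers(difficulties)
b = answers(difficulties)

# Observed agreement: both right or both wrong on the same item.
observed = sum(x == y for x, y in zip(a, b)) / n_items

# Null 1: independence matching only marginal accuracies (ignores difficulty).
acc_a = sum(a) / n_items
acc_b = sum(b) / n_items
null_marginal = acc_a * acc_b + (1 - acc_a) * (1 - acc_b)

# Null 2: independence conditional on each item's difficulty.
null_difficulty = sum((1 - d) ** 2 + d ** 2 for d in difficulties) / n_items

print(f"observed agreement:         {observed:.3f}")
print(f"marginal-independence null: {null_marginal:.3f}")
print(f"difficulty-aware null:      {null_difficulty:.3f}")
```

With uniform difficulties, observed agreement lands near 2/3 while the marginal-independence null predicts about 1/2, so the naive null flags "excess" agreement that the difficulty-aware null fully explains; the same data support opposite conclusions depending on the baseline chosen.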