Epistemic diversity across language models mitigates knowledge collapse

📅 2025-12-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Widespread AI deployment risks "knowledge collapse": a convergence of learned representations toward dominant, homogenized patterns. Method: We propose a multi-model collaborative self-training framework that partitions training data across heterogeneous language models and jointly optimizes their evolution through iterative co-adaptation, explicitly modeling ecosystem-level dynamics. Contribution/Results: We empirically identify an optimal level of epistemic diversity: insufficient diversity yields impoverished representations, while excessive diversity impedes individual model convergence. This yields a tunable paradigm linking ecosystem diversity to knowledge decay. Using distributional bias-based diversity metrics and ecosystem-level performance evaluation, experiments over ten self-training iterations demonstrate that moderate diversity, corresponding to an optimal number of constituent models, significantly mitigates performance degradation. Our findings provide empirical evidence for mitigating single-origin risks in AI development.
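
To make the ecosystem-level dynamics concrete, here is a minimal, purely illustrative sketch (not the paper's implementation): toy categorical "models" over a synthetic vocabulary stand in for language models, training data is partitioned across them each round, and every subsequent round is trained only on the ecosystem's pooled output. The vocabulary size, sample counts, smoothing constant, and the KL-to-truth measure are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: the "true distribution" is a categorical over a
# vocabulary of ideas; each "model" is a smoothed multinomial estimator.
VOCAB = 200
TRUE_DIST = rng.dirichlet(np.ones(VOCAB) * 0.3)   # long-tailed mixture of ideas
SAMPLES_PER_ROUND = 5_000
SMOOTHING = 1e-3

def fit_model(tokens: np.ndarray) -> np.ndarray:
    """Estimate a categorical distribution from one model's data partition."""
    counts = np.bincount(tokens, minlength=VOCAB) + SMOOTHING
    return counts / counts.sum()

def kl_to_true(dist: np.ndarray) -> float:
    """KL(true || ecosystem): how much of the true mixture the ecosystem misses."""
    return float(np.sum(TRUE_DIST * np.log(TRUE_DIST / dist)))

def run_ecosystem(n_models: int, iterations: int = 10) -> list[float]:
    """Collective self-training: each round, data is split across the models,
    each model re-fits on its shard, and the next round's data is sampled
    only from the models' pooled output."""
    data = rng.choice(VOCAB, size=SAMPLES_PER_ROUND, p=TRUE_DIST)  # round 0: real data
    history = []
    for _ in range(iterations):
        shards = np.array_split(rng.permutation(data), n_models)
        models = [fit_model(shard) for shard in shards]
        ecosystem = np.mean(models, axis=0)           # pooled ecosystem distribution
        history.append(kl_to_true(ecosystem))
        data = np.concatenate(
            [rng.choice(VOCAB, size=SAMPLES_PER_ROUND // n_models, p=m) for m in models]
        )
    return history

# Sweep over ecosystem sizes: too few models collapse quickly, too many start poorly.
for n in (1, 4, 16, 64):
    print(n, [round(x, 3) for x in run_ecosystem(n)])
```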

📝 Abstract
The growing use of artificial intelligence (AI) raises concerns of knowledge collapse, i.e., a reduction to the most dominant and central set of ideas. Prior work has demonstrated single-model collapse, defined as performance decay in an AI model trained on its own output. Inspired by ecology, we ask whether AI ecosystem diversity, that is, diversity among models, can mitigate such a collapse. We build on the single-model approach but focus on ecosystems of models trained on their collective output. To study the effect of diversity on model performance, we segment the training data across language models and evaluate the resulting ecosystems over ten self-training iterations. We find that increased epistemic diversity mitigates collapse, but, interestingly, only up to an optimal level. Our results suggest that an ecosystem containing only a few diverse models fails to express the rich mixture of the full, true distribution, resulting in rapid performance decay. Yet distributing the data across too many models reduces each model's approximation capacity on the true distribution, leading to poor performance from the very first iteration. In the context of AI monoculture, our results suggest the need to monitor diversity across AI systems and to develop policies that incentivize more domain- and community-specific models.
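
The abstract's notion of diversity among models can be instantiated in several ways; one plausible distributional measure (an assumption for illustration, not necessarily the paper's metric) is the mean pairwise Jensen-Shannon distance between the models' output distributions:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def ecosystem_diversity(model_dists: list[np.ndarray]) -> float:
    """Illustrative epistemic-diversity score: mean pairwise Jensen-Shannon
    distance between the categorical output distributions of the models."""
    n = len(model_dists)
    if n < 2:
        return 0.0
    pairs = [
        jensenshannon(model_dists[i], model_dists[j])
        for i in range(n) for j in range(i + 1, n)
    ]
    return float(np.mean(pairs))

# Example: three toy "models" over a 5-idea vocabulary.
dists = [
    np.array([0.6, 0.2, 0.1, 0.05, 0.05]),
    np.array([0.1, 0.5, 0.2, 0.1, 0.1]),
    np.array([0.2, 0.2, 0.2, 0.2, 0.2]),
]
print(round(ecosystem_diversity(dists), 3))
```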
Problem

Research questions and friction points this paper is trying to address.

Mitigate knowledge collapse in AI models
Assess effect of epistemic diversity on performance
Determine optimal model diversity to prevent decay
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diverse language models mitigate knowledge collapse
Segment training data across multiple models within an ecosystem
Optimal level of epistemic diversity mitigates performance decay
Damian Hodel
Center for an Informed Public, Information School, University of Washington, USA
Jevin D. West
Professor, Information School, University of Washington
Misinformation · Science of Science · Data Science