🤖 AI Summary
Widespread AI deployment risks “knowledge collapse”—a convergence of learned representations toward dominant, homogenized patterns. Method: We propose a multi-model collaborative self-training framework that partitions training data across heterogeneous language models and jointly optimizes their evolution through iterative co-adaptation, explicitly modeling ecosystem-level dynamics. Contribution/Results: We empirically identify an optimal range of cognitive diversity: insufficient diversity yields impoverished representations, while excessive diversity impedes each individual model's convergence. This yields a tunable paradigm that links ecosystem diversity to the rate of knowledge decay. Using distributional-bias-based diversity metrics and ecosystem-level performance evaluation, experiments over ten self-training rounds demonstrate that moderate diversity—corresponding to an optimal number of constituent models—significantly mitigates performance degradation. Our findings provide critical empirical evidence for mitigating single-origin risks in AI development.
📝 Abstract
The growing use of artificial intelligence (AI) raises concerns about knowledge collapse, i.e., a reduction to the most dominant and central set of ideas. Prior work has demonstrated single-model collapse, defined as performance decay in an AI model trained on its own output. Inspired by ecology, we ask whether AI ecosystem diversity, that is, diversity among models, can mitigate such a collapse. We build on the single-model approach but focus on ecosystems of models trained on their collective output. To study the effect of diversity on model performance, we segment the training data across language models and evaluate the resulting ecosystems over ten self-training iterations. We find that increased epistemic diversity mitigates collapse, but, interestingly, only up to an optimal level. Our results suggest that an ecosystem containing only a few diverse models fails to express the rich mixture of the full, true distribution, resulting in rapid performance decay. Yet distributing the data across too many models reduces each model's approximation capacity on the true distribution, leading to poor performance already in the first iteration step. In the context of AI monoculture, our results suggest the need to monitor diversity across AI systems and to develop policies that incentivize more domain- and community-specific models.
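The self-training loop described above—partition the data across models, train each on its shard, pool the generated output, and repeat—can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: unigram frequency tables stand in for language models, and all function names (`ecosystem_self_train`, `vocab_coverage`, etc.) are invented for this sketch. The vocabulary-coverage measure is only a crude proxy for the distributional-bias diversity metrics the paper uses.

```python
import random
from collections import Counter

def train_unigram(texts):
    """Toy stand-in for a language model: a unigram frequency table."""
    counts = Counter(tok for t in texts for tok in t.split())
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def sample(model, n_tokens, rng):
    """Generate synthetic 'text' by sampling tokens from the model."""
    toks = list(model)
    weights = [model[t] for t in toks]
    return " ".join(rng.choices(toks, weights=weights, k=n_tokens))

def ecosystem_self_train(corpus, n_models, n_iterations, rng):
    """Partition the corpus across n_models, then iterate:
    each model trains on its shard, every model generates output,
    and the pooled collective output becomes the next round's data."""
    data = list(corpus)
    models = []
    per_model = len(corpus) // n_models
    for _ in range(n_iterations):
        shards = [data[i::n_models] for i in range(n_models)]   # segment data
        models = [train_unigram(shard) for shard in shards]     # train ecosystem
        # models retrain on their *collective* output next round
        data = [sample(m, n_tokens=20, rng=rng)
                for m in models for _ in range(per_model)]
    return models

def vocab_coverage(models, corpus):
    """Fraction of the original vocabulary the ecosystem can still express
    (a crude proxy for knowledge collapse)."""
    original = {tok for t in corpus for tok in t.split()}
    surviving = set().union(*(set(m) for m in models))
    return len(surviving & original) / len(original)

rng = random.Random(0)
corpus = [" ".join(rng.choices("abcdefghij", k=30)) for _ in range(40)]
models = ecosystem_self_train(corpus, n_models=4, n_iterations=10, rng=rng)
coverage = vocab_coverage(models, corpus)
```

Sweeping `n_models` in such a setup would let one probe the paper's central trade-off: too few models collapse quickly onto dominant tokens, while too many leave each shard too small to approximate the original distribution.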