Neural Diversity Regularizes Hallucinations in Small Models

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Language models continue to hallucinate despite growth in parameters and training data. This work introduces "neural diversity" (decorrelated parallel representations) as a third optimization axis, orthogonal to model size and data volume, and proposes ND-LoRA: a LoRA-based adaptation method that runs parallel adapters under Barlow Twins regularization to enforce decorrelation among their representations. Theoretical analysis and causal intervention studies establish that neural diversity mediates hallucination reduction, with a task-dependent optimal amount of diversity. Under fixed parameter and data budgets, ND-LoRA reduces hallucination rates by up to 25.6% (14.6% on average) without degrading general accuracy. A quantifiable relationship also emerges: a 0.1% increase in neural correlation is associated with a 3.8% increase in hallucination rate, indicating a controllable, interpretable effect. Together, these results establish a new axis for improving the reliability of compact language models.

📝 Abstract
Language models continue to hallucinate despite increases in parameters, compute, and data. We propose neural diversity -- decorrelated parallel representations -- as a principled mechanism that reduces hallucination rates at fixed parameter and data budgets. Inspired by portfolio theory, where uncorrelated assets reduce risk by $\sqrt{P}$, we prove hallucination probability is bounded by representational correlation: $P(H) \leq f(\sigma^2((1-\rho(P))/P + \rho(P)), \mu^2)$, which predicts that language models need an optimal amount of neurodiversity. To validate this, we introduce ND-LoRA (Neural Diversity Low-Rank Adaptation), combining parallel LoRA adapters with Barlow Twins regularization, and demonstrate that ND-LoRA reduces hallucinations by up to 25.6% (and 14.6% on average) without degrading general accuracy. Ablations show LoRA adapters and regularization act synergistically, causal interventions prove neurodiversity as the mediating factor, and correlational analyses indicate scale: a 0.1% neural correlation increase is associated with a 3.8% hallucination increase. Finally, task-dependent optimality emerges: different tasks require different amounts of optimal neurodiversity. Together, our results highlight neural diversity as a third axis of scaling -- orthogonal to parameters and data -- to improve the reliability of language models at fixed budgets.
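The variance term inside the bound, $\sigma^2((1-\rho(P))/P + \rho(P))$, is exactly the variance of an equal-weight average of $P$ equicorrelated variables: at $\rho = 0$ it collapses to $\sigma^2/P$ (the $\sqrt{P}$ risk reduction of portfolio theory), and at $\rho = 1$ averaging buys nothing. A minimal sketch verifying this identity from the covariance matrix (the function names and equal weighting are illustrative, not from the paper):

```python
import numpy as np

def mean_variance(sigma2, rho, P):
    """Variance of the equal-weight average of P variables with common
    variance sigma2 and pairwise correlation rho, computed exactly
    from the covariance matrix."""
    C = sigma2 * ((1 - rho) * np.eye(P) + rho * np.ones((P, P)))
    w = np.full(P, 1.0 / P)  # equal-weight average
    return float(w @ C @ w)

def closed_form(sigma2, rho, P):
    """The closed-form variance term appearing in the paper's bound."""
    return sigma2 * ((1 - rho) / P + rho)

# Fully decorrelated branches: variance shrinks by 1/P.
print(mean_variance(1.0, 0.0, 8))   # sigma^2 / P = 0.125
# Fully correlated branches: averaging gives no reduction.
print(mean_variance(1.0, 1.0, 8))   # sigma^2 = 1.0
# The two expressions agree for intermediate correlations.
print(np.isclose(mean_variance(0.5, 0.3, 4), closed_form(0.5, 0.3, 4)))
```

This is why the bound predicts diminishing returns from adding parallel representations once their correlation $\rho(P)$ dominates the $1/P$ term.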
Problem

Research questions and friction points this paper is trying to address.

Reducing hallucination rates in language models
Optimizing neural diversity to improve model reliability
Balancing representational correlation for task-dependent performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural diversity reduces hallucinations via decorrelated representations
ND-LoRA combines parallel adapters with Barlow Twins regularization
Optimal neurodiversity varies across different language tasks
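The mechanism in the points above can be sketched in NumPy. This is an illustrative reading of the abstract, not the authors' implementation: the function names, the equal-weight averaging of adapter branches, and all shapes and hyperparameters are assumptions; the loss follows the standard Barlow Twins redundancy-reduction formulation, which pulls the cross-correlation matrix of two branch representations toward the identity.

```python
import numpy as np

rng = np.random.default_rng(0)

def parallel_lora_forward(x, W, adapters, scale=2.0):
    """Hypothetical sketch of parallel LoRA: a frozen weight W plus P
    low-rank adapter pairs (A_i, B_i) whose outputs are averaged.
    Returns the combined output and the per-branch representations."""
    branches = [scale * x @ A.T @ B.T for A, B in adapters]
    return x @ W.T + np.mean(branches, axis=0), branches

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins redundancy reduction between two branch
    representations: standardize each, form their cross-correlation
    matrix, and penalize deviation from the identity (decorrelation)."""
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = z1.T @ z2 / z1.shape[0]          # cross-correlation matrix
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)
    off_diag = np.sum(c ** 2) - np.sum(np.diag(c) ** 2)
    return on_diag + lam * off_diag

# Toy usage: d_in=4, d_out=6, P=2 adapters of rank 2.
W = rng.normal(size=(6, 4))
adapters = [(0.1 * rng.normal(size=(2, 4)), 0.1 * rng.normal(size=(6, 2)))
            for _ in range(2)]
x = rng.normal(size=(32, 4))
y, branches = parallel_lora_forward(x, W, adapters)
diversity_penalty = barlow_twins_loss(branches[0], branches[1])
```

During adaptation, this penalty would be added to the task loss so the parallel branches stay decorrelated, which is the lever the paper identifies for reducing hallucination.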