NeuroTrails: Training with Dynamic Sparse Heads as the Key to Effective Ensembling

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Model ensembles improve generalization and robustness but incur prohibitive computational overhead, and existing lightweight ensemble methods struggle to balance efficiency and performance. This paper proposes NeuroTrails, a dynamic sparse multi-head architecture that enables efficient ensemble learning by adaptively evolving neural pathways during training. Its key contributions are: (1) the first dynamic sparse head topology evolution mechanism, which autonomously steers individual pathways into a "sweet spot" of predictive diversity, yielding robust, lightweight ensembles in a model-agnostic framework; and (2) the integration of dynamic sparse training, multi-head parameter sharing, and cross-architecture adaptability (supporting both CNNs and Transformers). Evaluated on ResNet-50/ImageNet and LLaMA-350M/C4, NeuroTrails significantly improves zero-shot accuracy and out-of-distribution robustness while substantially reducing parameter count and inference cost.

📝 Abstract
Model ensembles have long been a cornerstone for improving generalization and robustness in deep learning. However, their effectiveness often comes at the cost of substantial computational overhead. To address this issue, state-of-the-art methods aim to replicate ensemble-class performance without requiring multiple independently trained networks. Unfortunately, these algorithms often still demand considerable compute at inference. In response to these limitations, we introduce **NeuroTrails**, a sparse multi-head architecture with dynamically evolving topology. This unexplored model-agnostic training paradigm improves ensemble performance while reducing the required resources. We analyze the underlying reason for its effectiveness and observe that the various neural trails induced by dynamic sparsity attain a *Goldilocks zone* of prediction diversity. NeuroTrails displays efficacy with convolutional and transformer-based architectures on computer vision and language tasks. Experiments on ResNet-50/ImageNet, LLaMA-350M/C4, among many others, demonstrate increased accuracy and stronger robustness in zero-shot generalization, while requiring significantly fewer parameters.
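The abstract's core mechanism, shared features feeding several sparse heads whose connectivity periodically evolves, can be illustrated with a dependency-free toy. Everything below (the layer sizes, the SET-style magnitude-prune / random-regrow rule, the averaging of head logits) is an illustrative assumption, not the paper's exact recipe:

```python
import random

random.seed(0)

FEAT, CLASSES, HEADS = 16, 4, 3
SPARSITY = 0.75          # fraction of each head's weights held at zero
PRUNE_FRAC = 0.3         # fraction of active weights replaced per update

def init_head():
    """A head is a CLASSES x FEAT weight matrix plus a binary mask."""
    n_active = int(CLASSES * FEAT * (1 - SPARSITY))
    idx = random.sample(range(CLASSES * FEAT), n_active)
    mask = [[0] * FEAT for _ in range(CLASSES)]
    w = [[random.gauss(0, 0.1) for _ in range(FEAT)] for _ in range(CLASSES)]
    for i in idx:
        mask[i // FEAT][i % FEAT] = 1
    return w, mask

def head_logits(head, x):
    """Masked linear layer: only active connections contribute."""
    w, mask = head
    return [sum(w[c][j] * mask[c][j] * x[j] for j in range(FEAT))
            for c in range(CLASSES)]

def prune_and_regrow(head):
    """SET-style topology update: drop the smallest-magnitude active
    weights, then regrow the same number at random inactive positions,
    so overall sparsity stays constant while the topology evolves."""
    w, mask = head
    active = [(abs(w[c][j]), c, j) for c in range(CLASSES)
              for j in range(FEAT) if mask[c][j]]
    active.sort()
    k = int(len(active) * PRUNE_FRAC)
    for _, c, j in active[:k]:
        mask[c][j] = 0
    inactive = [(c, j) for c in range(CLASSES)
                for j in range(FEAT) if not mask[c][j]]
    for c, j in random.sample(inactive, k):
        mask[c][j] = 1
        w[c][j] = random.gauss(0, 0.1)   # fresh weight for the new connection

def ensemble_predict(heads, x):
    """Average logits over all sparse heads (the ensemble output)."""
    per_head = [head_logits(h, x) for h in heads]
    return [sum(col) / len(heads) for col in zip(*per_head)]

heads = [init_head() for _ in range(HEADS)]
x = [random.gauss(0, 1) for _ in range(FEAT)]
before = ensemble_predict(heads, x)
for h in heads:
    prune_and_regrow(h)   # each head's topology evolves independently
after = ensemble_predict(heads, x)
```

Because each head prunes and regrows independently, the heads drift toward distinct connectivity patterns over training, which is the source of the prediction diversity the paper studies.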
Problem

Research questions and friction points this paper is trying to address.

Reducing computational overhead in model ensembles
Improving ensemble performance with dynamic sparsity
Enhancing accuracy and robustness with fewer parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic sparse heads for efficient ensembling
Model-agnostic training with evolving topology
Achieves Goldilocks zone of prediction diversity
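The "Goldilocks zone" idea above is that ensemble members should disagree enough to complement each other, but not so much that each is individually weak. A simple way to quantify this, chosen here for illustration and not necessarily the paper's exact measure, is the average pairwise disagreement between the heads' class predictions:

```python
from itertools import combinations

def pairwise_disagreement(preds):
    """Mean fraction of inputs on which two ensemble members predict
    different classes, averaged over all member pairs."""
    pairs = list(combinations(preds, 2))
    return sum(
        sum(a != b for a, b in zip(p, q)) / len(p) for p, q in pairs
    ) / len(pairs)

# Three hypothetical heads' class predictions on five inputs:
identical = [[0, 1, 2, 1, 0]] * 3                                # no diversity
varied = [[0, 1, 2, 1, 0], [0, 2, 2, 1, 0], [1, 1, 2, 0, 0]]     # some diversity

print(pairwise_disagreement(identical))  # 0.0
print(pairwise_disagreement(varied))     # 0.4
```

A value of 0 means the ensemble is no better than a single head; very high values suggest the members are too weak individually, so the useful regime lies in between.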