More Experts Than Galaxies: Conditionally-overlapping Experts With Biologically-Inspired Fixed Routing

📅 2024-10-10
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing neural networks for multi-task learning suffer from representation interference, computational redundancy, poor generalization, and limited zero-shot adaptability, particularly in dynamic settings without explicit task identifiers. To address these challenges, the authors propose COMET, a sparse modular architecture built from conditionally overlapping experts. Its core innovation is replacing learnable gating, which can cause representation collapse, with fixed random projections that perform lightweight, input-dependent routing. Because the degree of expert overlap grows with input similarity, the design balances efficiency, generalization, and scalability. COMET achieves significant improvements in per-update-step learning speed and out-of-sample generalization across image classification, language modeling, and regression tasks, and it supports large-scale, task-ID-free deployment with flexible adaptation to unseen tasks.

📝 Abstract
The evolution of biological neural systems has led to both modularity and sparse coding, which enables energy efficiency and robustness across the diversity of tasks in the lifespan. In contrast, standard neural networks rely on dense, non-specialized architectures, where all model parameters are simultaneously updated to learn multiple tasks, leading to interference. Current sparse neural network approaches aim to alleviate this issue but are hindered by limitations such as 1) trainable gating functions that cause representation collapse, 2) disjoint experts that result in redundant computation and slow learning, and 3) reliance on explicit input or task IDs that limit flexibility and scalability. In this paper we propose Conditionally Overlapping Mixture of ExperTs (COMET), a general deep learning method that addresses these challenges by inducing a modular, sparse architecture with an exponential number of overlapping experts. COMET replaces the trainable gating function used in Sparse Mixture of Experts with a fixed, biologically inspired random projection applied to individual input representations. This design causes the degree of expert overlap to depend on input similarity, so that similar inputs tend to share more parameters. This results in faster learning per update step and improved out-of-sample generalization. We demonstrate the effectiveness of COMET on a range of tasks, including image classification, language modeling, and regression, using several popular deep learning architectures.
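The routing idea in the abstract is easy to illustrate. Below is a minimal NumPy sketch, not the paper's exact formulation: the k-winners-take-all selection, the layer form, and all dimensions are assumptions chosen for this example. Each distinct binary mask acts as one expert, so selecting k of n hidden units admits up to C(n, k) distinct experts per layer, the combinatorially large count the title alludes to.

```python
import numpy as np

def make_fixed_router(d_in, d_hidden, seed=0):
    """Draw the routing projection once; it is never trained, so the
    routing cannot collapse the way a learned gating network can."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((d_in, d_hidden)) / np.sqrt(d_in)

def route_mask(x, R, k):
    """Binary mask over hidden units via k-winners-take-all on the fixed
    random projection of x (the k-WTA step is an assumption here).
    Similar inputs project to similar scores, so their masks overlap more."""
    scores = x @ R
    mask = np.zeros(R.shape[1])
    mask[np.argsort(scores)[-k:]] = 1.0
    return mask

def masked_layer(x, W, b, mask):
    """A feed-forward layer whose active subnetwork, i.e. the 'expert'
    selected for this input, is the subset of units the mask keeps."""
    return mask * np.maximum(x @ W + b, 0.0)

# Masks of similar inputs share more units than masks of unrelated ones.
d_in, d_hidden, k = 16, 64, 8
rng = np.random.default_rng(1)
R = make_fixed_router(d_in, d_hidden)
x1 = rng.standard_normal(d_in)
x2 = x1 + 0.1 * rng.standard_normal(d_in)   # a nearby input
x3 = rng.standard_normal(d_in)              # an unrelated input
m1, m2, m3 = (route_mask(x, R, k) for x in (x1, x2, x3))
print(int((m1 * m2).sum()), "shared units with the similar input")
print(int((m1 * m3).sum()), "shared units with the unrelated input")

W = rng.standard_normal((d_in, d_hidden)) / np.sqrt(d_in)
b = np.zeros(d_hidden)
h1 = masked_layer(x1, W, b, m1)  # only the selected expert's units fire
```

Because the projection is fixed, nearby inputs receive similar scores and therefore overlapping masks, which is what lets them share parameters; the two printed overlap counts make that visible on a toy scale.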
Problem

Research questions and friction points this paper is trying to address.

Neural Networks
Task Interference
Adaptability to Unseen Data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular Neural Network
Exponential Scaling
Fixed Random Projections
👥 Authors
Sagi Shaier
Department of Computer Science, University of Colorado Boulder
Francisco Pereira
Machine Learning Core, National Institute of Mental Health
Katharina von der Wense
Department of Computer Science, University of Colorado Boulder; Institute of Computer Science, Johannes Gutenberg University Mainz
Lawrence E. Hunter
Department of Pediatrics, University of Chicago
Matt Jones
Department of Psychology and Neuroscience, University of Colorado Boulder