🤖 AI Summary
Understanding the expressive power of Mixture-of-Experts (MoE) models for functions supported on low-dimensional manifolds and exhibiting sparse, structured piecewise smoothness.
Method: Integrating tools from approximation theory, manifold learning, and piecewise smooth analysis to develop a rigorous theoretical framework for MoE representation capacity.
Contribution/Results: (1) We prove that shallow MoE models efficiently approximate functions on low-dimensional manifolds, circumventing the curse of dimensionality. (2) Deep MoE architectures with only *L* layers and *E* experts per layer exactly represent composite functions comprising up to *E^L* structured sparse pieces, i.e., the number of representable pieces grows exponentially in depth. We quantitatively characterize how gating mechanisms, expert capacity, depth, and combinatorial sparsity govern expressivity, establishing an interpretable mapping between MoE architectural parameters and data structural priors (i.e., low-dimensionality and sparsity). This provides principled theoretical guidance for efficient MoE architecture design and hyperparameter selection.
📝 Abstract
Mixture-of-experts networks (MoEs) have demonstrated remarkable efficiency in modern deep learning. Despite their empirical success, the theoretical foundations underlying their ability to model complex tasks remain poorly understood. In this work, we conduct a systematic study of the expressive power of MoEs in modeling complex tasks with two common structural priors: low-dimensionality and sparsity. For shallow MoEs, we prove that they can efficiently approximate functions supported on low-dimensional manifolds, overcoming the curse of dimensionality. For deep MoEs, we show that $\mathcal{O}(L)$-layer MoEs with $E$ experts per layer can approximate piecewise functions comprising $E^L$ pieces with compositional sparsity, i.e., they can express an exponential number of structured tasks. Our analysis reveals the roles of critical architectural components and hyperparameters in MoEs, including the gating mechanism, expert networks, the number of experts, and the number of layers, and offers natural suggestions for MoE variants.
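The exponential-pieces argument can be made concrete with a toy sketch (not the paper's construction): stacking $L$ layers of hard top-1 gating over $E$ experts means each input follows one of at most $E^L$ routing paths, each path composing a different sequence of experts. All weights and shapes below are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_layer(x, gate_w, experts):
    """Top-1 gated MoE layer: a linear gate scores the E experts,
    the argmax expert processes x (hard routing, no mixing)."""
    scores = gate_w @ x                # one gating score per expert
    k = int(np.argmax(scores))        # hard top-1 selection
    return np.tanh(experts[k] @ x), k

d, E, L = 4, 3, 2                     # input dim, experts/layer, depth (toy values)
gate_ws = [rng.normal(size=(E, d)) for _ in range(L)]
experts = [[rng.normal(size=(d, d)) for _ in range(E)] for _ in range(L)]

def forward(x):
    """Run x through L MoE layers; record the routing path taken."""
    path = []
    for l in range(L):
        x, k = moe_layer(x, gate_ws[l], experts[l])
        path.append(k)
    return x, tuple(path)

# Each input lands on one of at most E**L paths (here 3**2 = 9); each path
# is a distinct composition of experts, i.e., a distinct "piece".
paths = {forward(rng.normal(size=d))[1] for _ in range(2000)}
print(f"{len(paths)} distinct routing paths observed, at most {E**L}")
```

The count of observed paths is bounded by $E^L$, which is the combinatorial source of the exponential piecewise expressivity described in the abstract; the paper's actual constructions concern which piecewise functions such paths can realize, not just how many paths exist.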