On the Expressive Power of Mixture-of-Experts for Structured Complex Tasks

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Understanding the expressive power of Mixture-of-Experts (MoE) models for functions supported on low-dimensional manifolds and exhibiting sparse, structured piecewise smoothness. Method: Integrating tools from approximation theory, manifold learning, and piecewise smooth analysis to develop a rigorous theoretical framework for MoE representation capacity. Contribution/Results: (1) Shallow MoE models efficiently approximate functions on low-dimensional manifolds, circumventing the curse of dimensionality. (2) Deep MoE architectures with only *L* layers and *E* experts per layer exactly represent composite functions comprising up to *E^L* structured sparse pieces, i.e., the number of representable pieces grows exponentially in depth. The analysis quantitatively characterizes how gating mechanisms, expert capacity, depth, and combinatorial sparsity govern expressivity, establishing an interpretable mapping between MoE architectural parameters and data structural priors (low-dimensionality and sparsity). This provides principled theoretical guidance for efficient MoE architecture design and hyperparameter selection.

📝 Abstract
Mixture-of-experts networks (MoEs) have demonstrated remarkable efficiency in modern deep learning. Despite their empirical success, the theoretical foundations underlying their ability to model complex tasks remain poorly understood. In this work, we conduct a systematic study of the expressive power of MoEs in modeling complex tasks with two common structural priors: low-dimensionality and sparsity. For shallow MoEs, we prove that they can efficiently approximate functions supported on low-dimensional manifolds, overcoming the curse of dimensionality. For deep MoEs, we show that $\mathcal{O}(L)$-layer MoEs with $E$ experts per layer can approximate piecewise functions comprising $E^L$ pieces with compositional sparsity, i.e., they can express an exponential number of structured tasks. Our analysis reveals the roles of critical architectural components and hyperparameters in MoEs, including the gating mechanism, expert networks, the number of experts, and the number of layers, and offers natural suggestions for MoE variants.
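To make the $E^L$ counting argument concrete, here is a minimal sketch (not the paper's construction) of a deep MoE with hard top-1 gating. Each of the $L$ layers routes its input to one of $E$ experts, so the composed network realizes up to $E^L$ distinct routing paths, each applying a different composition of expert functions. The gating rule, expert functions, and example inputs below are illustrative assumptions, not taken from the paper.

```python
# Sketch: a depth-L mixture-of-experts with hard (top-1) scalar gating.
# Each layer picks one of E experts, so the network has up to E**L
# distinct routing paths -- the combinatorial source of the E^L claim.

def hard_gate(x, boundaries):
    """Return the expert index for x given sorted gating boundaries."""
    for i, b in enumerate(boundaries):
        if x < b:
            return i
    return len(boundaries)

def deep_moe(x, layers):
    """layers: list of (boundaries, experts) pairs, experts are callables.
    Returns the output and the routing path (one expert index per layer)."""
    path = []
    for boundaries, experts in layers:
        i = hard_gate(x, boundaries)
        path.append(i)
        x = experts[i](x)
    return x, tuple(path)

# Toy instance with L = 2 layers and E = 2 experts per layer,
# so at most 2**2 = 4 routing paths exist.
layers = [
    ([0.0], [lambda x: x + 2, lambda x: x - 2]),
    ([1.0], [lambda x: -x, lambda x: 3 * x]),
]

# Distinct inputs can exercise all four compositions of affine pieces.
paths = {deep_moe(x, layers)[1] for x in (-2.0, -0.4, 0.3, 3.0)}
```

Here the gate is a simple threshold on a scalar input; the paper's setting concerns learned gating networks over high-dimensional inputs, but the path-counting intuition is the same: depth multiplies the number of selectable pieces rather than adding to it.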
Problem

Research questions and friction points this paper is trying to address.

Understanding MoEs' ability to model complex structured tasks
Analyzing shallow MoEs for low-dimensional manifold approximation
Exploring deep MoEs for piecewise functions with sparsity
Innovation

Methods, ideas, or system contributions that make the work stand out.

MoEs efficiently approximate low-dimensional manifold functions
Deep MoEs model exponential structured tasks sparsely
Analysis guides MoE architecture and hyperparameter optimization
Mingze Wang
School of Mathematical Sciences, Peking University
Machine Learning Theory · Deep Learning Theory · Optimization
E. Weinan
Center for Machine Learning Research and School of Mathematical Sciences, Peking University, Beijing, China; AI for Science Institute, Beijing, China