🤖 AI Summary
To address the limited scalability and expressive power of quantum machine learning (QML) models in the NISQ era, this paper proposes QMoE, the first Quantum Mixture-of-Experts framework. QMoE employs multiple parameterized quantum circuits as "experts" and introduces a learnable quantum routing mechanism that adaptively selects and weights experts based on the input quantum state, enabling input-driven dynamic modeling. By combining quantum superposition, entanglement, and classical optimization, QMoE supports end-to-end training within a hybrid classical-quantum paradigm. Experiments demonstrate that QMoE significantly outperforms standard quantum neural networks on quantum classification tasks, achieving superior expressivity, generalization, and linear scalability in the number of experts. Moreover, the modular expert structure enhances interpretability by exposing input-dependent expert activation patterns.
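For intuition, here is a minimal sketch of how such a forward pass could be assembled with PennyLane. The qubit count, number of experts, circuit templates, and the router design (a small parameterized circuit whose measurements are softmaxed into gating weights) are illustrative assumptions, not the paper's exact construction.

```python
# Minimal, illustrative QMoE-style forward pass (a sketch, not the paper's implementation).
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_experts, n_layers = 4, 3, 2   # assumed sizes, for illustration only

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def expert_circuit(x, weights):
    """One parameterized quantum circuit acting as an expert."""
    qml.AngleEmbedding(x, wires=range(n_qubits))                  # encode the input
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))  # trainable layers
    return qml.expval(qml.PauliZ(0))                              # scalar expert output

@qml.qnode(dev)
def router_circuit(x, weights):
    """Small parameterized circuit whose measurements drive the gating weights."""
    qml.AngleEmbedding(x, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_experts)]  # one score per expert

def softmax(z):
    z = np.array(z)
    e = np.exp(z - np.max(z))
    return e / e.sum()

def qmoe_forward(x, expert_weights, router_weights):
    """Gate the input over experts, then aggregate the weighted expert outputs."""
    gates = softmax(router_circuit(x, router_weights))            # input-dependent weights
    outputs = np.array([expert_circuit(x, w) for w in expert_weights])
    return np.dot(gates, outputs)                                 # weighted combination

# Random initial parameters; in training these would be optimized jointly,
# e.g. with a classical gradient-based optimizer in a hybrid loop.
shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
expert_weights = [np.random.uniform(0, np.pi, shape) for _ in range(n_experts)]
router_weights = np.random.uniform(0, np.pi, shape)

x = np.random.uniform(0, np.pi, n_qubits)   # toy input features
print(qmoe_forward(x, expert_weights, router_weights))
```

A full training loop would update the expert and router parameters together with a classical optimizer, which is one way the end-to-end hybrid classical-quantum training described above could be realized.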
📄 Abstract
Quantum machine learning (QML) has emerged as a promising direction in the noisy intermediate-scale quantum (NISQ) era, offering computational and memory advantages by harnessing superposition and entanglement. However, QML models often face challenges in scalability and expressiveness due to hardware constraints. In this paper, we propose quantum mixture of experts (QMoE), a novel quantum architecture that integrates the mixture of experts (MoE) paradigm into the QML setting. QMoE comprises multiple parameterized quantum circuits serving as expert models, along with a learnable quantum routing mechanism that selects and aggregates specialized quantum experts per input. Empirical results on quantum classification tasks demonstrate that the proposed QMoE consistently outperforms standard quantum neural networks, highlighting its effectiveness in learning complex data patterns. Our work paves the way for scalable and interpretable quantum learning frameworks.