🤖 AI Summary
Conventional Mixture-of-Experts (MoE) models deploy experts in parallel and independently, lacking inter-expert collaboration, which limits representational capacity.
Method: We propose Chain-of-Experts (CoE), a novel architecture that organizes experts into a chain structure within each layer. CoE employs intra-layer multi-step routing with a dynamic iterative routing mechanism, allowing tokens to be processed by, and re-allocated across, multiple experts over several steps. It further introduces an iterative residual structure and a dynamic expert selection algorithm, enabling scaling along a new dimension, "expert iteration depth", without increasing computational cost.
Results: Experiments on mathematical reasoning tasks show that, under fixed FLOPs, CoE reduces validation loss from 1.20 to 1.12. With only two iterations, it matches the performance of a 3× wider MoE while reducing memory overhead by 17.6%–42%. CoE significantly enhances collaborative efficiency and resource utilization.
📝 Abstract
We propose Chain-of-Experts (CoE), a new Mixture-of-Experts (MoE) architecture that introduces sequential expert communication within each layer. Unlike traditional MoE models, where experts operate independently in parallel, CoE processes tokens iteratively across a chain of experts inside a layer. To support dynamic expert selection across iterations, CoE employs a dedicated router at each iteration step within a layer. This design allows tokens to re-evaluate and select different experts during each iteration, rather than being statically assigned. As a result, CoE introduces a flexible routing mechanism that increases the diversity of expert combinations and enriches the model's representational capacity. CoE demonstrates improved performance under fixed compute: on math reasoning tasks, it reduces validation loss from 1.20 to 1.12 compared to a standard MoE. Beyond performance, CoE offers a new scaling axis: depth through expert iteration, which complements conventional width/depth scaling. For example, using 2× iterations matches the performance of 3× expert selections (in width), while reducing memory usage by 17.6%–42% relative to other scaling strategies. Our analysis reveals that CoE's benefits stem from its iterative residual structure and enhanced expert specialization empowered by iterative routing, which together unlock more expressive representations. Code is available at https://github.com/ZihanWang314/coe.
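The intra-layer mechanism described above (a fresh router at each iteration step, top-k expert selection, and a residual connection across iterations) can be sketched as follows. This is a minimal toy illustration in NumPy, not the paper's implementation: the expert and router parameters, `num_iters`, and `top_k` are illustrative assumptions, and each "expert" is reduced to a single weight matrix with a `tanh` nonlinearity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def coe_layer(tokens, expert_weights, router_weights, num_iters=2, top_k=2):
    """Toy sketch of one Chain-of-Experts layer (illustrative, not the paper's code).

    tokens:          (n_tokens, d) input token states
    expert_weights:  list of (d, d) matrices, one per toy "expert"
    router_weights:  list of (d, n_experts) matrices, one router per iteration
    """
    x = tokens
    for t in range(num_iters):
        # Dedicated router for this iteration: tokens may pick
        # different experts than they did in the previous step.
        probs = softmax(x @ router_weights[t])            # (n_tokens, n_experts)
        top = np.argsort(-probs, axis=-1)[:, :top_k]      # top-k experts per token
        out = np.zeros_like(x)
        for i in range(x.shape[0]):
            for e in top[i]:
                # Gate renormalized over the selected experts.
                gate = probs[i, e] / probs[i, top[i]].sum()
                out[i] += gate * np.tanh(x[i] @ expert_weights[e])
        # Iterative residual structure: each step refines the running state.
        x = x + out
    return x
```

Because the loop reuses the same expert pool at every step, iteration depth adds no parameters; only the per-iteration routers decide which experts a token visits next, which is the source of the combinatorial diversity the abstract describes.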