🤖 AI Summary
This work addresses the inefficiency of large language models that allocate computation uniformly across all tokens, even though many sequences contain highly predictable segments requiring minimal reasoning. The authors propose a dynamic token merging mechanism that employs a learnable chunking module to identify semantically similar tokens and compress them into concept representations. This enables implicit, token-level compute allocation while keeping total parameters and activated FLOPs constant. Combining a mixture-of-experts (MoE) architecture with learnable chunk partitioning, token-similarity-based merging, and layer-wise recurrent training, the method gains +0.9, +2.3, and +0.6 points on language pretraining, long-context understanding, and multimodal tasks, respectively. At a compression ratio of R=2, it speeds up the prefill phase by up to 175%, reduces KV cache memory by 2×, and cuts attention computation by 4×.
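Where the R=2 efficiency numbers come from (a back-of-the-envelope sketch, not the paper's measurement code; the function name `theoretical_savings` is illustrative): self-attention cost grows quadratically with sequence length while the KV cache grows linearly, so halving the sequence before it enters the concept model cuts attention compute by roughly R² = 4× and cache memory by R = 2×.

```python
# Back-of-the-envelope scaling only; real speedups (e.g. the reported 175% prefill
# gain) also depend on the chunk module's overhead and hardware utilization.
def theoretical_savings(seq_len: int, ratio: int = 2):
    merged_len = seq_len // ratio
    attention_saving = seq_len**2 / merged_len**2  # attention is quadratic in length -> ~R^2
    kv_cache_saving = seq_len / merged_len         # KV cache is linear in length    -> ~R
    return attention_saving, kv_cache_saving

print(theoretical_savings(4096, ratio=2))  # (4.0, 2.0): ~4x less attention, ~2x smaller KV cache
```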
📝 Abstract
Large language models allocate uniform computation across all tokens, ignoring that some sequences are trivially predictable while others require deep reasoning. We introduce ConceptMoE, which dynamically merges semantically similar tokens into concept representations, performing implicit token-level compute allocation. A learnable chunk module identifies optimal boundaries by measuring inter-token similarity, compressing sequences by a target ratio $R$ before they enter the compute-intensive concept model. Crucially, the MoE architecture enables controlled evaluation: we reallocate saved computation to match baseline activated FLOPs (excluding attention map computation) and total parameters, isolating genuine architectural benefits. Under these conditions, ConceptMoE consistently outperforms standard MoE across language and vision-language tasks, achieving +0.9 points on language pretraining, +2.3 points on long context understanding, and +0.6 points on multimodal benchmarks. When converting pretrained MoE during continual training with layer looping, gains reach +5.5 points, demonstrating practical applicability. Beyond performance, ConceptMoE reduces attention computation by up to $R^2\times$ and KV cache by $R\times$. At $R=2$, empirical measurements show prefill speedups reaching 175\% and decoding speedups up to 117\% on long sequences. The minimal architectural modifications enable straightforward integration into existing MoE, demonstrating that adaptive concept-level processing fundamentally improves both effectiveness and efficiency of large language models.
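For intuition, below is a minimal sketch of similarity-driven token merging. It is not the paper's learnable chunk module: here, boundaries come from a fixed cosine-similarity heuristic and chunks are mean-pooled, whereas ConceptMoE learns the chunking end to end; the function name `merge_by_similarity` and the pooling choice are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def merge_by_similarity(hidden: torch.Tensor, target_ratio: int = 2) -> torch.Tensor:
    """Merge adjacent, similar tokens into roughly seq_len / target_ratio concepts.

    Illustrative only: ConceptMoE learns chunk boundaries; here they are chosen by
    a cosine-similarity heuristic and chunks are simply mean-pooled.
    hidden: (seq_len, dim) token representations.
    """
    seq_len, _ = hidden.shape
    num_chunks = max(1, seq_len // target_ratio)

    # Similarity between each token and its predecessor.
    sim = F.cosine_similarity(hidden[1:], hidden[:-1], dim=-1)  # (seq_len - 1,)

    # Cut at the least-similar adjacent pairs, so similar neighbours share a chunk.
    cuts = torch.topk(-sim, k=num_chunks - 1).indices + 1
    boundaries = torch.cat([torch.tensor([0]), cuts.sort().values, torch.tensor([seq_len])])

    # Pool each chunk into a single concept vector fed to the concept model.
    concepts = torch.stack([
        hidden[lo:hi].mean(dim=0) for lo, hi in zip(boundaries[:-1], boundaries[1:])
    ])
    return concepts  # (num_chunks, dim)

# Example: 16 tokens compressed to 8 concept vectors at R = 2.
print(merge_by_similarity(torch.randn(16, 64), target_ratio=2).shape)  # torch.Size([8, 64])
```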