🤖 AI Summary
This work proposes a parameter-free MoE-style fine-tuning approach that avoids the inefficiency of existing parameter-efficient fine-tuning (PEFT) methods that incorporate mixture-of-experts (MoE) mechanisms, which typically introduce additional trainable routers and expert parameters. Instead, the method treats the adapters already present in the Transformer (such as those on the QKV and up/down projections) as implicit experts and employs a gradient-free token routing strategy based on k-means clustering with cluster centers updated by an exponential moving average. It is the first approach to achieve token specialization without introducing any new trainable parameters, thereby alleviating the representation cancellation caused by shared adapters. Evaluated across 14 text, 14 image, and 19 video benchmarks, the method matches the performance of current MoE-PEFT approaches while using 7–29× fewer trainable parameters, reducing memory consumption by up to 48%, and accelerating training by 1.5–2×.
📝 Abstract
Mixture-of-experts variants of parameter-efficient fine-tuning enable per-token specialization, but they introduce additional trainable routers and expert parameters, increasing memory usage and training cost. This undermines the core goal of parameter-efficient fine-tuning. We propose Monkey Jump, a method that brings mixture-of-experts-style specialization to parameter-efficient fine-tuning without introducing extra trainable parameters for experts or routers. Instead of adding new adapters as experts, Monkey Jump treats the adapters already present in each Transformer block (such as query, key, value, up, and down projections) as implicit experts and routes tokens among them. Routing is performed using k-means clustering with cluster centers maintained as exponential moving averages, requiring no gradients and no learned parameters. We theoretically show that token-wise routing increases expressivity and can outperform shared adapters by avoiding cancellation effects. Across multi-task experiments covering 14 text, 14 image, and 19 video benchmarks, Monkey Jump achieves competitive performance with mixture-of-experts-based parameter-efficient fine-tuning methods while using 7 to 29 times fewer trainable parameters, up to 48 percent lower memory consumption, and 1.5 to 2 times faster training. Monkey Jump is architecture-agnostic and can be applied to any adapter-based parameter-efficient fine-tuning method.
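The routing described above (nearest-centroid assignment with exponentially moving averaged centers) can be sketched as follows. This is a minimal illustration of the general idea, not the paper's implementation; the class name, the momentum value, and the use of plain L2 distance are assumptions.

```python
import numpy as np

class EMARouter:
    """Parameter-free token router: assigns each token to one of k implicit
    experts (existing adapters) by nearest cluster center, and updates the
    centers with an exponential moving average. No gradients, no learned
    parameters."""

    def __init__(self, num_experts: int, dim: int, momentum: float = 0.99, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Centers are initialized randomly; in practice they could be seeded
        # from the first batch of token hidden states (an assumption here).
        self.centers = rng.standard_normal((num_experts, dim))
        self.momentum = momentum

    def route(self, tokens: np.ndarray) -> np.ndarray:
        # tokens: (n, dim). Squared L2 distance from every token to every center.
        d = ((tokens[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)  # (n,) expert index per token
        # EMA update: nudge each center toward the mean of its assigned tokens.
        for e in range(self.centers.shape[0]):
            mask = assign == e
            if mask.any():
                self.centers[e] = (self.momentum * self.centers[e]
                                   + (1 - self.momentum) * tokens[mask].mean(axis=0))
        return assign
```

In use, each token's hidden state would then pass only through the adapter whose index matches its assigned cluster, so the block's existing adapters act as the experts without any added router weights.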