🤖 AI Summary
Recursive transformers suffer from expressive collapse across layers due to parameter sharing, which limits representational capacity and downstream performance. To address this, we propose a lightweight Mixture-of-LoRAs (MoL) mechanism that enables token-conditioned weight-space modulation within shared feed-forward networks, achieving fine-grained conditional adaptation without decoupling backbone parameters. An expert-merging strategy further compresses the MoL into a single LoRA module at inference time, incurring no additional overhead. Integrated with rotary embeddings, GeGLU activations, FlashAttention, and knowledge-distillation-based initialization, our approach yields ModernALBERT, a family of recursive architectures (50M–120M parameters). Extensive evaluation on GLUE, SQuAD-v2, and BEIR demonstrates consistent gains over same-scale full-parameter models, establishing a new state of the art for compact models and substantially restoring the representational power of recursive architectures.
📝 Abstract
Parameter sharing in recursive transformers reduces model size but collapses layer-wise expressivity. We propose Mixture of LoRAs (MoL), a lightweight conditional-computation mechanism that inserts Low-Rank Adaptation (LoRA) experts inside a shared feed-forward network (FFN). MoL enables token-conditional weight-space modulation of the shared FFN without untying backbone parameters, unlike prior approaches that add fixed or externally attached adapters. We pretrain a modernised recursive architecture, ModernALBERT, integrating rotary embeddings, GeGLU, FlashAttention, and a distillation-based initialisation. Across GLUE, SQuAD-v2, and BEIR, ModernALBERT (50M–120M) achieves state-of-the-art performance among compact models and surpasses larger fully parameterised baselines. We also propose an expert-merging procedure that compresses MoL into a single adapter at inference while preserving accuracy, enabling efficient deployment. Our results show that conditional weight-space modulation effectively restores the expressivity lost under aggressive parameter sharing in recursive transformers.
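To make the two mechanisms concrete, here is a minimal numpy sketch of (a) a token-conditioned Mixture-of-LoRAs modulating one shared FFN projection and (b) a merging step that collapses the experts into a single LoRA. All dimensions, the softmax router, and the merging rule (averaging expert deltas by mean routing probability, then refactoring via SVD) are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

# Hypothetical sizes; the paper's actual dimensions are not given here.
d_model, d_ff, rank, n_experts = 64, 256, 8, 4
rng = np.random.default_rng(0)

# Shared (tied) FFN projection, reused at every recursive layer.
W = rng.standard_normal((d_model, d_ff)) * 0.02

# One low-rank (LoRA) expert per slot: delta_e = A[e] @ B[e].
A = rng.standard_normal((n_experts, d_model, rank)) * 0.02
B = rng.standard_normal((n_experts, rank, d_ff)) * 0.02

# Token-conditioned router over experts (assumed softmax gating).
W_router = rng.standard_normal((d_model, n_experts)) * 0.02

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mol_ffn(x):
    """MoL forward: each token mixes the low-rank expert deltas
    according to its routing weights, modulating the shared W."""
    gates = softmax(x @ W_router)                        # (T, E)
    base = x @ W                                         # shared path
    # Per-expert low-rank path x @ A[e] @ B[e], mixed per token.
    expert_out = np.einsum('td,edr,erf->tef', x, A, B)   # (T, E, d_ff)
    delta = np.einsum('te,tef->tf', gates, expert_out)   # (T, d_ff)
    return base + delta

def merge_experts(mean_gates):
    """Collapse the mixture into a single LoRA: average the expert
    deltas under the mean routing distribution, then truncate back
    to `rank` via SVD. An assumed merging rule for illustration."""
    merged_delta = np.einsum('e,edr,erf->df', mean_gates, A, B)
    U, S, Vt = np.linalg.svd(merged_delta, full_matrices=False)
    A_m = U[:, :rank] * S[:rank]   # (d_model, rank)
    B_m = Vt[:rank]                # (rank, d_ff)
    return A_m, B_m

# After merging, inference uses one static adapter: x @ W + x @ A_m @ B_m,
# so the router and per-expert paths add no cost at deployment.
```

The merged path drops the per-token routing, which is the source of the claimed zero inference overhead; how closely it tracks the full mixture depends on how peaked the routing distribution is.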