Improving Recursive Transformers with Mixture of LoRAs

📅 2025-12-14
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Recursive Transformers suffer from expressive collapse across layers due to parameter sharing, which limits representational capacity and model performance. To address this, we propose a lightweight Mixture-of-LoRAs (MoL) mechanism that enables token-conditioned weight-space modulation within shared feed-forward networks, achieving fine-grained conditional adaptation without decoupling backbone parameters. An expert-merging strategy further compresses the MoL into a single LoRA module at inference time, incurring no additional overhead. Integrated with rotary embeddings, GeGLU activations, FlashAttention, and knowledge-distillation-based initialization, our approach yields ModernALBERT, a family of recursive architectures (50M-120M parameters). Extensive evaluation on GLUE, SQuAD-v2, and BEIR demonstrates consistent gains over same-scale full-parameter models, establishing a new state of the art for compact models and substantially restoring the representational power of recursive architectures.

📝 Abstract
Parameter sharing in recursive transformers reduces model size but collapses layer-wise expressivity. We propose Mixture of LoRAs (MoL), a lightweight conditional-computation mechanism that inserts Low-Rank Adaptation (LoRA) experts inside a shared feed-forward network (FFN). MoL enables token-conditional weight-space modulation of the shared FFN without untying backbone parameters, unlike prior approaches that add fixed or externally attached adapters. We pretrain a modernised recursive architecture, ModernALBERT, integrating rotary embeddings, GeGLU, FlashAttention, and a distillation-based initialisation. Across GLUE, SQuAD-v2, and BEIR, ModernALBERT (50M--120M) achieves state-of-the-art performance among compact models and surpasses larger fully parameterised baselines. We also propose an expert-merging procedure that compresses MoL into a single adapter at inference while preserving accuracy, enabling efficient deployment. Our results show that conditional weight-space modulation effectively restores the expressivity lost under aggressive parameter sharing in recursive transformers.
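The core mechanism described in the abstract can be sketched in a few lines: LoRA experts sit inside the shared FFN, and a per-token router mixes their low-rank deltas on top of the shared weight. The following numpy sketch is illustrative only; the function names, shapes, and the softmax router are assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mol_ffn(x, W0, experts, Wg):
    """Token-conditioned Mixture-of-LoRAs over a shared FFN weight (sketch).

    x:       (n_tokens, d_in) token activations
    W0:      (d_in, d_out) shared (tied) FFN weight
    experts: list of LoRA pairs (A, B), A: (d_in, r), B: (r, d_out)
    Wg:      (d_in, n_experts) router weights (assumed linear router)
    """
    gate = softmax(x @ Wg)          # per-token mixture weights over experts
    out = x @ W0                    # shared backbone path, unchanged
    for i, (A, B) in enumerate(experts):
        # each expert contributes a gated low-rank update to the shared FFN
        out += gate[:, i:i + 1] * (x @ A @ B)
    return out
```

Because the backbone `W0` stays tied across recursions, only the small `(A, B)` pairs and the router add parameters, which is what keeps the mechanism lightweight.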
Problem

Research questions and friction points this paper is trying to address.

Enhances recursive transformers with lightweight conditional computation
Restores expressivity lost in parameter-shared transformer layers
Enables efficient deployment via expert-merging for inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of LoRAs enables token-conditional weight modulation
ModernALBERT integrates rotary embeddings and FlashAttention for efficiency
Expert-merging compresses MoL into a single adapter for deployment
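The expert-merging contribution above can be sketched as follows: average the expert deltas under mean gate weights, then compress the sum back to a single low-rank adapter via truncated SVD. This is a plausible reconstruction under stated assumptions, not the paper's exact procedure; `merge_experts` and its arguments are hypothetical names.

```python
import numpy as np

def merge_experts(experts, gate_means, rank):
    """Merge LoRA experts into one rank-`rank` adapter (sketch).

    experts:    list of LoRA pairs (A, B), A: (d_in, r), B: (r, d_out)
    gate_means: per-expert average routing weights (assumed precomputed)
    rank:       target rank of the single merged adapter
    """
    d_in, d_out = experts[0][0].shape[0], experts[0][1].shape[1]
    delta = np.zeros((d_in, d_out))
    for (A, B), g in zip(experts, gate_means):
        delta += g * (A @ B)        # gate-weighted sum of expert deltas
    # compress the summed delta back to a single low-rank pair via SVD
    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    A_m = U[:, :rank] * S[:rank]    # (d_in, rank)
    B_m = Vt[:rank]                 # (rank, d_out)
    return A_m, B_m
```

After merging, inference uses only `W0 + A_m @ B_m`, so the router and the per-expert matrices disappear from the deployed model, matching the "zero overhead at inference" claim.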
Mohammadmahdi Nouriborji (NLPIE Research, UK)
Morteza Rohanian (University of Zurich)
Omid Rohanian (Department of Engineering Science, University of Oxford, UK)