🤖 AI Summary
To address the limited robustness of MoE-LoRA during fine-tuning and inference, this paper proposes Riemann-MoE-LoRA, a robust Mixture-of-Experts (MoE) method based on Riemannian manifold optimization. The core innovation is the first integration of Riemannian preconditioning into the MoE-LoRA training framework, replacing conventional point-wise parameter updates with multi-subspace projections to stabilize feature learning. The method combines Riemannian optimization, low-rank matrix decomposition, and the MoE architecture, achieving improved stability and generalization without additional inference overhead. Extensive experiments demonstrate that Riemann-MoE-LoRA consistently improves robustness across diverse downstream tasks, under different optimizers (e.g., SGD, AdamW) and training perturbations. The implementation is publicly available.
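To make the MoE-LoRA structure concrete, here is a minimal sketch of a mixture of LoRA experts in numpy: a frozen weight `W0` plus a gated sum of per-expert low-rank updates `B_i A_i`. All names (`W0`, `experts`, `Wg`, `moe_lora_forward`) and the dense softmax gate are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r, n_experts = 32, 32, 4, 3  # output dim, input dim, LoRA rank, experts (illustrative)

W0 = rng.standard_normal((d, k)) * 0.01          # frozen pretrained weight
experts = [(np.zeros((d, r)),                    # B_i: zero-initialized "up" factor
            rng.standard_normal((r, k)) * 0.01)  # A_i: "down" factor
           for _ in range(n_experts)]
Wg = rng.standard_normal((k, n_experts)) * 0.01  # gating weights (hypothetical dense gate)

def moe_lora_forward(x):
    """y = x (W0 + sum_i g_i(x) B_i A_i)^T with a softmax gate g over experts."""
    logits = x @ Wg
    g = np.exp(logits - logits.max(-1, keepdims=True))
    g = g / g.sum(-1, keepdims=True)             # softmax: one mixing weight per expert
    # per-sample low-rank correction, shape (batch, d, k)
    delta = sum(g[:, i:i + 1, None] * (B @ A)[None]
                for i, (B, A) in enumerate(experts))
    return np.einsum('bk,bdk->bd', x, W0[None] + delta)
```

With the conventional zero initialization of each `B_i`, the mixture initially reproduces the frozen model exactly; training only the small `A_i`, `B_i`, and gate parameters is what makes the scheme parameter-efficient.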
📝 Abstract
To streamline the fine-tuning of foundation models, Low-Rank Adapters (LoRAs) have been widely adopted across various fields, including instruction tuning and domain adaptation. The underlying idea of LoRA is to decompose a full-rank update matrix into the product of two lower-rank matrices, which reduces storage consumption and accelerates training. Furthermore, to address the limited expressive capacity of LoRA, the Mixture-of-Experts (MoE) architecture has been introduced to incorporate multiple LoRA adapters. This integration of LoRA experts yields visible improvements across several downstream scenarios. However, the mixture of LoRAs (MoE-LoRA) still exhibits low robustness during tuning and inference. Inspired by Riemannian preconditioners, which train LoRA as a subspace projector, we propose a new training strategy for MoE-LoRA that stabilizes and boosts its feature learning via multi-subspace projections. Experiments with the SGD and AdamW optimizers demonstrate the effectiveness of our methodology. Source code is available at https://github.com/THUDM/MoELoRA_Riemannian.
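The low-rank decomposition and the preconditioned update can be sketched as follows: the trainable update is `B @ A` on top of a frozen `W0`, and each factor's gradient is rescaled by the inverse Gram matrix of the other factor, in the spirit of Riemannian preconditioning for low-rank factorizations. This is a minimal illustration under assumed names (`lora_forward`, `preconditioned_step`, `eps`), not the repository's actual optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 64, 64, 8  # frozen weight is d x k; LoRA rank r << min(d, k)

W0 = rng.standard_normal((d, k)) / np.sqrt(k)  # frozen pretrained weight
B = np.zeros((d, r))                           # LoRA "up" factor, zero-initialized
A = rng.standard_normal((r, k)) / np.sqrt(k)   # LoRA "down" factor

def lora_forward(x):
    # effective weight is W0 + B @ A; only A and B are trained
    return x @ (W0 + B @ A).T

def preconditioned_step(grad_B, grad_A, lr=1e-2, eps=1e-6):
    """One Riemannian-preconditioned gradient step (sketch): scale each
    factor's gradient by the inverse r x r Gram matrix of the other factor."""
    global A, B
    GA = A @ A.T + eps * np.eye(r)   # Gram matrix of A (eps keeps it invertible)
    GB = B.T @ B + eps * np.eye(r)   # Gram matrix of B
    B -= lr * grad_B @ np.linalg.inv(GA)
    A -= lr * np.linalg.inv(GB) @ grad_A
```

The storage saving is immediate: the factors hold `r * (d + k)` parameters instead of `d * k`, and the preconditioner only ever inverts small `r x r` matrices, so the extra cost per step is negligible.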