StelLA: Subspace Learning in Low-rank Adaptation using Stiefel Manifold

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Low-rank adaptation (LoRA) achieves parameter efficiency but underperforms full fine-tuning, partly because it neglects the geometric structure of low-rank subspaces. Method: The paper proposes StelLA, which explicitly models the input and output subspaces as orthonormal matrices on the Stiefel manifold and employs an SVD-inspired three-factor decomposition $U\!SV^\top$ to decouple subspace geometry from scaling. Training is performed end-to-end via Riemannian optimization that integrates seamlessly with standard Euclidean optimizers, without increasing inference overhead. Contribution/Results: StelLA consistently outperforms existing LoRA variants across diverse tasks, including commonsense reasoning, mathematical and code generation, image classification, and image generation, achieving state-of-the-art performance. The results empirically validate that incorporating geometric priors enhances the representational capacity of low-rank adaptation.

📝 Abstract
Low-rank adaptation (LoRA) has been widely adopted as a parameter-efficient technique for fine-tuning large-scale pre-trained models. However, it still lags behind full fine-tuning in performance, partly due to its insufficient exploitation of the geometric structure underlying low-rank manifolds. In this paper, we propose a geometry-aware extension of LoRA that uses a three-factor decomposition $U\!SV^\top$. Analogous to the structure of singular value decomposition (SVD), it separates the adapter's input and output subspaces, $V$ and $U$, from the scaling factor $S$. Our method constrains $U$ and $V$ to lie on the Stiefel manifold, ensuring their orthonormality throughout the training. To optimize on the Stiefel manifold, we employ a flexible and modular geometric optimization design that converts any Euclidean optimizer to a Riemannian one. It enables efficient subspace learning while remaining compatible with existing fine-tuning pipelines. Empirical results across a wide range of downstream tasks, including commonsense reasoning, math and code generation, image classification, and image generation, demonstrate the superior performance of our approach against the recent state-of-the-art variants of LoRA. Code is available at https://github.com/SonyResearch/stella.
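The three-factor adapter described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the dimensions, initialization, and the name `adapted_forward` are hypothetical, but the structure — a frozen weight $W$ plus an update $U\!SV^\top$ with orthonormal $U$ and $V$ and a separate scaling factor $S$ — follows the decomposition the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 3  # illustrative sizes; rank r << min(d_out, d_in)

# Frozen pretrained weight (stand-in for a real model layer).
W = rng.standard_normal((d_out, d_in))

# Three-factor adapter: U and V have orthonormal columns (points on the
# Stiefel manifold), while S carries the scaling, as in an SVD-like split.
U, _ = np.linalg.qr(rng.standard_normal((d_out, r)))
V, _ = np.linalg.qr(rng.standard_normal((d_in, r)))
S = np.diag(rng.standard_normal(r))

def adapted_forward(x):
    """Apply the adapted layer: y = (W + U S V^T) x."""
    return (W + U @ S @ V.T) @ x

x = rng.standard_normal(d_in)
y = adapted_forward(x)

# The Stiefel constraint the method maintains throughout training:
assert np.allclose(U.T @ U, np.eye(r), atol=1e-8)
assert np.allclose(V.T @ V, np.eye(r), atol=1e-8)
```

Because $U$ and $V$ stay orthonormal, $S$ alone controls the magnitude of the update, which is what decouples subspace geometry from scaling.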
Problem

Research questions and friction points this paper is trying to address.

Enhancing LoRA performance by exploiting low-rank manifold geometry
Constraining subspaces to Stiefel manifold for orthonormal adaptation
Enabling efficient subspace learning while maintaining compatibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses three-factor decomposition analogous to SVD
Constrains subspaces on orthonormal Stiefel manifold
Converts Euclidean optimizers to Riemannian optimization
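The last point — turning a Euclidean optimizer into a Riemannian one — generally amounts to projecting the Euclidean gradient onto the tangent space of the Stiefel manifold, taking the optimizer's step in the ambient space, and retracting back onto the manifold. The sketch below shows this recipe for plain SGD, using the standard tangent projection under the embedded metric and a QR-based retraction; the function names are illustrative, and the paper's actual design is more general (it wraps any Euclidean optimizer).

```python
import numpy as np

def stiefel_project(X, G):
    """Project a Euclidean gradient G onto the tangent space of the
    Stiefel manifold at X (embedded metric): G - X * sym(X^T G)."""
    XtG = X.T @ G
    return G - X @ (XtG + XtG.T) / 2.0

def qr_retract(Y):
    """QR retraction: map an ambient-space point back onto the manifold,
    with signs fixed so the orthonormal factor is canonical."""
    Q, R = np.linalg.qr(Y)
    return Q * np.sign(np.diag(R))

def riemannian_sgd_step(X, G, lr=0.1):
    """One 'Euclidean SGD made Riemannian' step: project, move, retract."""
    xi = stiefel_project(X, G)
    return qr_retract(X - lr * xi)

rng = np.random.default_rng(1)
X, _ = np.linalg.qr(rng.standard_normal((6, 3)))  # start on the manifold
G = rng.standard_normal((6, 3))                   # stand-in for a backprop gradient
X_new = riemannian_sgd_step(X, G)

# The step preserves orthonormality of the subspace factor.
assert np.allclose(X_new.T @ X_new, np.eye(3), atol=1e-8)
```

The same project/step/retract pattern applies to momentum or Adam-style updates, which is what makes the conversion modular with respect to the choice of Euclidean optimizer.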
🔎 Similar Papers
2024-06-16 · Conference on Empirical Methods in Natural Language Processing · Citations: 17
Zhizhong Li
Sony AI, Zurich, Switzerland
Sina Sajadmanesh
Sony AI, Zurich, Switzerland
Jingtao Li
Sony AI, Zurich, Switzerland
Lingjuan Lyu
Sony
Foundation Models · Federated Learning · Responsible AI