🤖 AI Summary
To address the high computational cost and poor generalization of adapting foundation models, this paper presents the first systematic survey of Low-Rank Adaptation (LoRA) extensions across a broad range of foundation models, including multimodal and scientific-computing models. We propose a unified taxonomy spanning matrix low-rank decomposition, modular adapter design, gradient-constrained optimization, and cross-task transfer analysis, identifying key theoretical gaps and charting a direction toward robustness-aware modeling. Covering over 100 state-of-the-art works, we uncover common mechanisms underlying LoRA's cross-modal transferability and pinpoint critical deployment bottlenecks. The resulting synthesis offers a methodological framework and reproducible implementation pathways for lightweight adaptation of general-purpose foundation models, advancing efficient, robust, and scalable model customization.
📝 Abstract
The rapid advancement of foundation models (large-scale neural networks trained on diverse, extensive datasets) has revolutionized artificial intelligence, enabling breakthroughs across domains such as natural language processing, computer vision, and scientific discovery. However, the substantial parameter count of these models, often reaching billions or even trillions, poses significant challenges in adapting them to specific downstream tasks. Low-Rank Adaptation (LoRA) has emerged as a highly promising approach for mitigating these challenges, offering a parameter-efficient mechanism for fine-tuning foundation models with minimal computational overhead. This survey provides the first comprehensive review of LoRA techniques beyond large language models to general foundation models, covering technical foundations, emerging frontiers, and applications of low-rank adaptation across multiple domains. It then discusses key challenges and future research directions in theoretical understanding, scalability, and robustness. We hope this survey serves as a valuable resource for researchers and practitioners working on efficient foundation model adaptation.
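The core mechanism the abstract refers to can be sketched in a few lines: instead of updating a full weight matrix, LoRA learns a low-rank correction added to the frozen weights. The snippet below is a minimal NumPy illustration, not code from the surveyed works; the dimensions, rank, and `alpha` scaling are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of Low-Rank Adaptation (LoRA) for one linear layer
# y = x @ W.T.  Rather than updating the full weight W (d_out x d_in),
# LoRA learns a low-rank update Delta_W = B @ A with A (r x d_in) and
# B (d_out x r), where r << min(d_in, d_out).
rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 1024, 1024, 8, 16   # illustrative sizes, not from the survey
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init so Delta_W = 0 at start

def lora_forward(x):
    # Frozen path plus scaled low-rank correction (alpha / r is the usual scaling).
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((2, d_in))
full_params = W.size                # 1,048,576
lora_params = A.size + B.size       # 16,384, about 1.6% of the full matrix
print(f"trainable params: {lora_params} vs full fine-tune: {full_params}")
```

Because `B` is zero-initialized, the adapted layer reproduces the pretrained output exactly at the start of training; only `A` and `B` receive gradients, which is where the parameter savings come from.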