Automatic Rank Determination for Low-Rank Adaptation via Submodular Function Maximization

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Automatically determining the optimal rank in Low-Rank Adaptation (LoRA) remains challenging—particularly once parameter optimization has progressed far enough that linearized approximations of the loss become inaccurate and ill-conditioned. Method: This paper proposes SubLoRA, the first rank-selection framework that integrates second-order Hessian information with submodular function maximization. It constructs a quadratic combinatorial objective via a closed-form Hessian projection and formulates rank selection as a budget-constrained submodular optimization problem, solved by a greedy algorithm with provable approximation guarantees. The method combines theoretical rigor with computational efficiency, enabling joint alternating optimization of rank selection and model training. Results: Evaluated on Physics-Informed Neural Networks (PINNs) for PDE solving, SubLoRA significantly outperforms baselines including AdaLoRA, achieving state-of-the-art performance in both rank determination and downstream task performance.

📝 Abstract
In this paper, we propose SubLoRA, a rank determination method for Low-Rank Adaptation (LoRA) based on submodular function maximization. In contrast to prior approaches, such as AdaLoRA, that rely on first-order (linearized) approximations of the loss function, SubLoRA utilizes second-order information to capture the potentially complex loss landscape by incorporating the Hessian matrix. We show that the linearization becomes inaccurate and ill-conditioned when the LoRA parameters have been well optimized, motivating the need for a more reliable and nuanced second-order formulation. To this end, we reformulate the rank determination problem as a combinatorial optimization problem with a quadratic objective. However, solving this problem exactly is NP-hard in general. To overcome the computational challenge, we introduce a submodular function maximization framework and devise a greedy algorithm with approximation guarantees. We derive a necessary and sufficient condition under which the rank-determination objective becomes submodular, and construct a closed-form projection of the Hessian matrix that satisfies this condition while maintaining computational efficiency. Our method combines solid theoretical foundations, second-order accuracy, and practical computational efficiency. We further extend SubLoRA to a joint optimization setting, alternating between LoRA parameter updates and rank determination under a rank budget constraint. Extensive experiments on fine-tuning physics-informed neural networks (PINNs) for solving partial differential equations (PDEs) demonstrate the effectiveness of our approach. Results show that SubLoRA outperforms existing methods in both rank determination and joint training performance.
Problem

Research questions and friction points this paper is trying to address.

How to automatically determine the optimal rank for Low-Rank Adaptation (LoRA)
First-order (linearized) approximations of the loss become unreliable once the LoRA parameters are well optimized
Exact rank selection is an NP-hard combinatorial optimization problem
Innovation

Methods, ideas, or system contributions that make the work stand out.

SubLoRA incorporates second-order Hessian information via a closed-form projection
Reformulates rank determination as budget-constrained submodular maximization
Greedy algorithm with provable approximation guarantees
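The greedy scheme described above can be sketched with a toy stand-in objective. The quadratic surrogate below (indicator vector `x_S`, linear term `a`, matrix `H` with nonnegative off-diagonal entries, all hypothetical placeholders for the paper's projected-Hessian objective) has diminishing marginal gains and is therefore submodular, so the standard greedy selection under a rank budget enjoys the usual (1 − 1/e) approximation guarantee for monotone objectives. This is a minimal illustration, not the paper's actual SubLoRA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, budget = 8, 3  # n candidate rank-1 components, total rank budget

# Toy quadratic surrogate f(S) = a^T x_S - 0.5 * x_S^T H x_S, where x_S is
# the indicator vector of the selected set S. With nonnegative off-diagonal
# entries in H, marginal gains shrink as S grows, so f is submodular.
a = rng.uniform(1.0, 2.0, size=n)
H = rng.uniform(0.0, 0.2, size=(n, n))
H = 0.5 * (H + H.T)          # symmetrize
np.fill_diagonal(H, 0.1)

def f(S):
    """Quadratic set objective evaluated on the indicator vector of S."""
    x = np.zeros(n)
    x[list(S)] = 1.0
    return a @ x - 0.5 * x @ H @ x

def greedy(budget):
    """Greedily add the component with the largest marginal gain."""
    S, remaining = [], set(range(n))
    while len(S) < budget and remaining:
        j = max(remaining, key=lambda c: f(S + [c]) - f(S))
        if f(S + [j]) - f(S) <= 0:
            break  # no remaining component improves the objective
        S.append(j)
        remaining.remove(j)
    return S

S = greedy(budget)
print(sorted(S), round(f(S), 4))
```

In the full method, this selection step would alternate with LoRA parameter updates under the rank budget constraint.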