Meta-Learning Hyperparameters for Parameter Efficient Fine-Tuning

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing parameter-efficient fine-tuning (PEFT) methods, which rely on fixed hyperparameters and exhibit suboptimal performance on long-tailed data such as remote sensing images. To overcome this, we propose MetaPEFT, the first framework to integrate meta-learning into PEFT, enabling dynamic optimization of module insertion positions, layer selection, and module-specific learning rates. This approach facilitates joint adaptive control over diverse PEFT architectures, including LoRA and AdaptFormer. Extensive experiments demonstrate that MetaPEFT significantly improves accuracy on tail classes and enhances cross-spectral transferability. Remarkably, it achieves state-of-the-art performance across three transfer scenarios and five datasets while using only a minimal number of trainable parameters.

📝 Abstract
Training large foundation models from scratch for domain-specific applications is almost impossible due to data limits and long-tailed distributions -- taking remote sensing (RS) as an example. Fine-tuning natural image pre-trained models on RS images is a straightforward solution. To reduce computational costs and improve performance on tail classes, existing methods apply parameter-efficient fine-tuning (PEFT) techniques, such as LoRA and AdaptFormer. However, we observe that fixed hyperparameters -- such as intra-layer positions, layer depth, and scaling factors -- can considerably hinder PEFT performance, as fine-tuning on RS images proves highly sensitive to these settings. To address this, we propose MetaPEFT, a method incorporating adaptive scalers that dynamically adjust module influence during fine-tuning. MetaPEFT dynamically adjusts three key factors of PEFT on RS images: module insertion, layer selection, and module-wise learning rates, which collectively control the influence of PEFT modules across the network. We conduct extensive experiments on three transfer-learning scenarios and five datasets in both RS and natural image domains. The results show that MetaPEFT achieves state-of-the-art performance in cross-spectral adaptation, requiring only a small number of trainable parameters and improving tail-class accuracy significantly.
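The abstract's core idea -- a per-module adaptive scaler that gates how strongly each PEFT module influences the network -- can be illustrated with a minimal, framework-free sketch. This is an assumption-based illustration, not the paper's implementation: the function name `lora_forward` and the scaler `alpha` are hypothetical, and in MetaPEFT such scalers (along with module-wise learning rates) would be tuned by the meta-learner rather than set by hand.

```python
# Minimal sketch: a LoRA-style low-rank update gated by a learnable
# per-module scaler alpha. alpha = 0 disables the module entirely;
# larger values strengthen its influence on the frozen base path.

def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def lora_forward(W0, A, B, alpha, x):
    """Frozen base weight W0 plus a low-rank update B @ A, scaled by alpha."""
    base = matvec(W0, x)                 # frozen pre-trained path
    low_rank = matvec(B, matvec(A, x))   # trainable low-rank path
    return [b + alpha * lr for b, lr in zip(base, low_rank)]

# Toy example with dimension d = 2 and rank r = 1.
W0 = [[1.0, 0.0], [0.0, 1.0]]   # identity "pre-trained" weight
A  = [[1.0, 1.0]]               # r x d down-projection
B  = [[0.5], [0.5]]             # d x r up-projection
x  = [2.0, 4.0]

print(lora_forward(W0, A, B, 0.0, x))  # module off -> [2.0, 4.0]
print(lora_forward(W0, A, B, 1.0, x))  # module on  -> [5.0, 7.0]
```

Treating `alpha` (and, analogously, each module's learning rate) as a continuous trainable quantity is what lets a meta-learner jointly decide where modules are inserted and how much they contribute, instead of fixing those hyperparameters up front.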
Problem

Research questions and friction points this paper is trying to address.

parameter-efficient fine-tuning
hyperparameter sensitivity
remote sensing
tail-class performance
transfer learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-Learning
Parameter-Efficient Fine-Tuning
Adaptive Hyperparameters
Remote Sensing
Cross-Spectral Adaptation
Zichen Tian
CVML Lab@SMU
computer vision, deep learning
Yaoyao Liu
University of Illinois Urbana-Champaign
Qianru Sun
Singapore Management University