Directional Gradient Projection for Robust Fine-Tuning of Foundation Models

📅 2025-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the underfitting and hyperparameter sensitivity caused by strong weight constraints in robust fine-tuning of large models, this paper proposes Directional Gradient Projection (DiGraP), a learnable layer-wise gradient projection mechanism that jointly optimizes in-distribution (ID) generalization and out-of-distribution (OOD) robustness. Methodologically, the paper introduces (1) a layer-wise gradient projection that incorporates directional information from gradients, bridging regularization and multi-objective optimization; (2) a multi-modal OOD evaluation setting that categorizes ten OOD VQA datasets by distribution shift type and degree (near versus far OOD); and (3) an analysis bridging the uni-modal and multi-modal gap via Image Classification reformulated as VQA. Experiments on image classification and VQA tasks show consistent improvements in both ID accuracy and OOD robustness, with the method applicable to both discriminative and generative backbones and outperforming existing baselines.

📝 Abstract
Robust fine-tuning aims to adapt large foundation models to downstream tasks while preserving their robustness to distribution shifts. Existing methods primarily focus on constraining and projecting the current model towards the pre-trained initialization based on the magnitudes of the differences between fine-tuned and pre-trained weights, which often requires extensive hyper-parameter tuning and can sometimes result in underfitting. In this work, we propose Directional Gradient Projection (DiGraP), a novel layer-wise trainable method that incorporates directional information from gradients to bridge regularization and multi-objective optimization. Besides demonstrating our method on image classification, as another contribution we generalize this area to multi-modal evaluation settings for robust fine-tuning. Specifically, we first bridge the uni-modal and multi-modal gap by performing analysis on Image Classification reformulated as Visual Question Answering (VQA) benchmarks, and further categorize ten out-of-distribution (OOD) VQA datasets by distribution shift type and degree (i.e., near versus far OOD). Experimental results show that DiGraP consistently outperforms existing baselines across Image Classification and VQA tasks with discriminative and generative backbones, improving both in-distribution (ID) generalization and OOD robustness.
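The core idea, projecting each layer's task gradient based on its direction relative to the drift from the pre-trained weights, can be sketched as below. This is a minimal illustration, not the paper's exact formulation: the function name `directional_projection`, the per-layer coefficient `beta`, and the specific projection rule are assumptions for exposition.

```python
import numpy as np

def directional_projection(grad, w, w0, beta):
    """Hypothetical per-layer directional gradient projection (illustrative).

    If a descent step along `grad` would push the weights `w` further away
    from the pre-trained initialization `w0`, attenuate the drift-increasing
    component of the gradient by a per-layer coefficient `beta` in [0, 1].
    Otherwise leave the gradient untouched, avoiding the underfitting that a
    hard magnitude constraint can cause.
    """
    drift = w - w0                        # displacement from pre-trained init
    norm = np.linalg.norm(drift)
    if norm < 1e-12:                      # no drift yet: nothing to project
        return grad
    unit = drift / norm
    comp = float(np.dot(grad, unit))      # signed component along the drift
    if comp >= 0:
        return grad                       # descent step shrinks drift; keep it
    # Descent along this gradient would increase the drift: keep only a
    # beta-weighted fraction of the drift-increasing component.
    return grad - (1.0 - beta) * comp * unit
```

With `beta = 0` this fully removes the drift-increasing component (a hard projection, analogous to a strong weight constraint); with `beta = 1` it reduces to plain fine-tuning. Making `beta` trainable per layer is one way to read the "layer-wise trainable" aspect described in the abstract.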
Problem

Research questions and friction points this paper is trying to address.

Preserving robustness to distribution shifts while fine-tuning foundation models on downstream tasks
Magnitude-based weight constraints require extensive hyper-parameter tuning and can cause underfitting
Lack of multi-modal (VQA) evaluation settings for robust fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Directional Gradient Projection (DiGraP) method
Layer-wise trainable approach
Multi-modal evaluation settings