Zero-Shot Adaptation of Parameter-Efficient Fine-Tuning in Diffusion Models

📅 2025-05-29
🤖 AI Summary
This work addresses zero-shot transfer of low-rank adapters (e.g., LoRA) across text-to-image diffusion models—without retraining or access to target-model data. We propose a training-free, architecture-agnostic parameter transfer method: by projecting LoRA weights into the target model’s weight space, we establish a generalizable mapping via subspace similarity modeling and critical-layer alignment. To our knowledge, this is the first approach enabling zero-shot LoRA transfer across heterogeneous diffusion architectures—including SDXL, SD1.5, and FLUX. Experiments demonstrate that transferred adapters achieve performance on par with full fine-tuning, while drastically reducing computational cost and eliminating data dependencies. Our method establishes a new paradigm for efficient, reusable adaptation of diffusion models.

📝 Abstract
We introduce ProLoRA, enabling zero-shot adaptation of parameter-efficient fine-tuning in text-to-image diffusion models. ProLoRA transfers pre-trained low-rank adjustments (e.g., LoRA) from a source to a target model without additional training data. This overcomes the limitations of traditional methods that require retraining when switching base models, often challenging due to data constraints. ProLoRA achieves this via projection of source adjustments into the target model's weight space, leveraging subspace and null space similarities and selectively targeting aligned layers. Evaluations on established text-to-image models demonstrate successful knowledge transfer and comparable performance without retraining.
Problem

Research questions and friction points this paper is trying to address.

Adapting a pre-trained LoRA to a new base model traditionally requires retraining from scratch
Retraining is often infeasible due to data constraints or lack of access to the original training set
No prior method enables zero-shot transfer of low-rank adapters across heterogeneous diffusion architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

ProLoRA: a training-free, architecture-agnostic method for transferring LoRA adapters between diffusion models
Projects source low-rank adjustments into the target model's weight space, leveraging subspace and null-space similarities
Selectively targets aligned (critical) layers, achieving performance comparable to retraining without additional data
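The paper does not release reference code here, but the core idea described above can be illustrated with a minimal sketch: take a source LoRA update (the product of its low-rank factors), project it onto the subspace spanned by the target layer's dominant singular directions, and re-factor the result as a new low-rank pair. The function name, the choice of SVD-based projection, and all parameters below are illustrative assumptions, not ProLoRA's actual implementation.

```python
import numpy as np

def project_lora_update(B, A, W_tgt, k=8, rank=4):
    """Hypothetical sketch of subspace projection for zero-shot LoRA transfer.

    B @ A is the source model's low-rank update for one layer; W_tgt is the
    corresponding target-layer weight. The update is projected onto the span
    of the target layer's top-k singular directions, then re-factored at the
    given rank so it can be applied as a LoRA pair in the target model.
    """
    dW = B @ A                                    # dense source update
    U, _, Vt = np.linalg.svd(W_tgt, full_matrices=False)
    Uk, Vk = U[:, :k], Vt[:k, :].T                # target subspace bases
    dW_proj = Uk @ Uk.T @ dW @ Vk @ Vk.T          # project into target subspace
    # Re-factor the projected update as a rank-`rank` pair (B', A')
    Up, Sp, Vpt = np.linalg.svd(dW_proj, full_matrices=False)
    B_new = Up[:, :rank] * Sp[:rank]
    A_new = Vpt[:rank, :]
    return B_new, A_new
```

Because the projected update has rank at most min(k, source rank), re-factoring at that rank loses nothing; the real method additionally decides *which* layers are aligned enough to transfer, which this sketch omits.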