🤖 AI Summary
Diffusion model fine-tuning is computationally expensive, and existing LoRA methods reuse an identical adapter across all timesteps, limiting their capacity to model varying noise levels and to generalize across temporal regimes. To address this, we propose a two-stage temporal expert collaboration paradigm: first, we train specialized LoRA experts partitioned by timestep interval; second, we assemble an asymmetric mixture in which the LoRA expert from the finest timestep interval serves as the fixed, ungated core expert, coupled with gated context experts from coarser intervals for noise-aware, cross-scale adaptive fusion. Critically, because the gates are time-dependent rather than produced by a learnable gating network, the design adds negligible parameter and computational overhead. Our method achieves state-of-the-art performance across domain adaptation, post-pretraining, and model distillation, generalizing across UNet, DiT, and MM-DiT backbones as well as image and video modalities, while maintaining high efficiency.
📝 Abstract
Diffusion models have driven the advancement of visual generation in recent years. However, these large models are often difficult to apply to downstream tasks due to their massive fine-tuning cost. Recently, Low-Rank Adaptation (LoRA) has been applied for efficient tuning of diffusion models. Unfortunately, the capability of LoRA-tuned diffusion models is limited, since the same LoRA is used at every timestep of the diffusion process. To tackle this problem, we introduce a general and concise TimeStep Master (TSM) paradigm with two key fine-tuning stages. In the fostering stage (1-stage), we apply different LoRAs to fine-tune the diffusion model at different timestep intervals, yielding TimeStep LoRA experts that each effectively capture a different noise level. In the assembling stage (2-stage), we design a novel asymmetrical mixture of TimeStep LoRA experts via core-context collaboration across multi-scale intervals. For each timestep, we use the TimeStep LoRA expert of the smallest interval as the core expert without gating, and the experts of larger intervals as context experts with time-dependent gating. Consequently, TSM can effectively model the noise level via the expert at the finest interval and adaptively integrate context from experts at other scales, boosting the versatility of diffusion models. To show the effectiveness of the TSM paradigm, we conduct extensive experiments on three typical and popular LoRA-related tasks for diffusion models: domain adaptation, post-pretraining, and model distillation. TSM achieves state-of-the-art results on all three tasks across various model structures (UNet, DiT, and MM-DiT) and visual data modalities (image, video), demonstrating its remarkable generalization capacity.
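The assembling stage described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the layer dimensions, the number of interval scales, and the parameter-free `context_gate` function are all hypothetical placeholders standing in for whatever time-dependent gating the method actually uses. It only shows the asymmetric structure: one ungated core expert from the finest timestep partition, plus gated context experts from coarser partitions, all added to a frozen base weight.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000                   # total diffusion timesteps (toy value)
d_in, d_out, r = 8, 8, 2   # toy layer dims and LoRA rank

# Multi-scale partitions of [0, T): each scale splits the timestep
# range into n equal intervals, one LoRA expert (B, A) per interval.
# The first entry (8 intervals) is the finest scale -> core experts.
scales = [8, 4, 2]

experts = {
    n: [(0.01 * rng.standard_normal((d_out, r)),   # B factor
         0.01 * rng.standard_normal((r, d_in)))    # A factor
        for _ in range(n)]
    for n in scales
}

W = rng.standard_normal((d_out, d_in))  # frozen base weight

def context_gate(t, n):
    """Hypothetical parameter-free, time-dependent gate for a context
    scale with n intervals; any fixed function of t would do here."""
    return (0.5 + 0.5 * np.cos(2 * np.pi * t / T)) / n

def tsm_forward(x, t):
    """Asymmetric mixture: the core expert from the finest scale is
    applied with gate 1 (no gating); coarser-scale context experts
    are weighted by a time-dependent gate."""
    y = W @ x
    for i, n in enumerate(scales):
        idx = min(t * n // T, n - 1)   # interval containing timestep t
        B, A = experts[n][idx]
        gate = 1.0 if i == 0 else context_gate(t, n)
        y = y + gate * (B @ (A @ x))
    return y

x = rng.standard_normal(d_in)
y = tsm_forward(x, t=371)
```

Because each timestep indexes a different expert at every scale, nearby timesteps share the same coarse context experts while the fine-grained core expert changes more often, which is the core-context collaboration the abstract describes.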