🤖 AI Summary
Existing personalized visual generation methods—such as LoRA—rely on task-specific data and hour-scale fine-tuning, limiting practical deployment; hypernetwork-based approaches struggle to map fine-grained user prompts accurately to the complex, high-dimensional LoRA parameter distribution. To address this, we propose an efficient prior-prediction framework for personalization. First, we analyze relative parameter changes to uncover structured distribution patterns in LoRA weight updates. Then, we design a two-stage hypernetwork: the first stage predicts the underlying distribution pattern, and the second generates concrete LoRA weights conditioned on it. This decoupled design significantly enhances fine-grained prompt-to-distribution modeling. Experiments demonstrate that our method generates high-fidelity personalized outputs in seconds across diverse tasks and users—accelerating adaptation by over 100× compared to standard LoRA fine-tuning—while maintaining competitive performance.
📝 Abstract
Personalizing visual generative models to meet specific user needs has gained increasing attention, yet current methods like Low-Rank Adaptation (LoRA) remain impractical due to their demand for task-specific data and lengthy optimization. While a few hypernetwork-based approaches attempt to predict adaptation weights directly, they struggle to map fine-grained user prompts to complex LoRA distributions, limiting their practical applicability. To bridge this gap, we propose LoFA, a general framework that efficiently predicts personalized priors for fast model adaptation. We first identify a key property of LoRA: structured distribution patterns emerge in the relative changes between LoRA and base model parameters. Building on this, we design a two-stage hypernetwork: the first stage predicts relative distribution patterns that capture key adaptation regions, and the second uses these patterns to guide the final LoRA weight prediction. Extensive experiments demonstrate that our method consistently predicts high-quality personalized priors within seconds across multiple tasks and user prompts, even outperforming conventional LoRA, which requires hours of optimization. Project page: https://jaeger416.github.io/lofa/.
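To make the two-stage design concrete, here is a minimal, hypothetical sketch in NumPy. All dimensions, weight matrices, and the choice of a row-wise softmax saliency as the "relative distribution pattern" are illustrative assumptions, not the paper's actual architecture: stage one maps a prompt embedding to a pattern over weight rows, and stage two predicts low-rank factors conditioned on that pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): prompt embedding, base layer, LoRA rank
d_prompt, d_in, d_out, rank = 8, 16, 16, 4

def stage1_pattern(prompt_emb, W1):
    """Stage 1 (sketch): predict a relative distribution pattern --
    here a softmax saliency over the target layer's output rows,
    standing in for the structured patterns in relative parameter
    changes described in the text."""
    logits = W1 @ prompt_emb                      # (d_out,)
    exp = np.exp(logits - logits.max())           # stable softmax
    return exp / exp.sum()

def stage2_lora(prompt_emb, pattern, Wa, Wb):
    """Stage 2 (sketch): predict concrete LoRA factors A (d_out x r)
    and B (r x d_in), modulating A row-wise by the stage-1 pattern
    so adaptation concentrates in the predicted regions."""
    A = (Wa @ prompt_emb).reshape(d_out, rank) * pattern[:, None]
    B = (Wb @ prompt_emb).reshape(rank, d_in)
    return A, B

# Hypernetwork weights (random here; trained in a real system)
W1 = rng.normal(size=(d_out, d_prompt))
Wa = rng.normal(size=(d_out * rank, d_prompt))
Wb = rng.normal(size=(rank * d_in, d_prompt))

prompt_emb = rng.normal(size=d_prompt)            # e.g. a text-encoder output
pattern = stage1_pattern(prompt_emb, W1)
A, B = stage2_lora(prompt_emb, pattern, Wa, Wb)
delta_W = A @ B                                   # predicted low-rank update
```

The key design point the sketch illustrates is the decoupling: the coarse "where to adapt" signal is predicted first and fixed, so the second stage only has to solve the easier conditional problem of "how much to adapt" in those regions.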