🤖 AI Summary
This work addresses the limitations of LoRA fine-tuning in personalized image generation, where fixed-rank adaptation struggles to balance performance and memory efficiency and fails to accommodate varying subject complexities. The authors propose LoRA², the first method to incorporate a variable-rank mechanism into LoRA. Leveraging a variational-inspired importance-ranking strategy, LoRA² dynamically assigns adaptive ranks to different layers during fine-tuning, allowing each layer's rank to evolve freely according to its contribution. This approach overcomes the rigidity of conventional fixed-rank designs. Experiments across 29 subjects show that LoRA² achieves a competitive trade-off among the DINO, CLIP-I, and CLIP-T metrics while significantly reducing both memory consumption and average rank compared to high-rank LoRA.
📝 Abstract
Low-Rank Adaptation (LoRA) is the de facto fine-tuning strategy for generating personalized images from pre-trained diffusion models. Choosing a good rank is critical, since it trades off performance against memory consumption, yet today the decision is usually left to community consensus, regardless of the personalized subject's complexity. The reason is evident: the cost of selecting a good rank for each LoRA component is combinatorial, so practitioners opt for shortcuts such as fixing the same rank for all components. In this paper, we take a first step toward overcoming this challenge. Inspired by variational methods that learn an adaptive width of neural networks, we let the rank of each layer freely adapt during fine-tuning on a subject. We achieve this by imposing an ordering of importance on the rank positions, effectively encouraging the creation of higher ranks only when strictly needed. Qualitatively and quantitatively, our approach, LoRA$^2$, achieves a competitive trade-off between DINO, CLIP-I, and CLIP-T across 29 subjects while requiring much less memory and lower rank than high-rank LoRA variants. Code: https://github.com/donaldssh/NotAllLayersAreCreatedEqual.
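The core idea of ordering rank positions by importance can be illustrated with a minimal sketch. The snippet below is a hypothetical NumPy toy (the class name, threshold, and per-position score vector are my assumptions, not the paper's actual implementation): each of the `max_rank` positions of a LoRA layer carries an importance score, and a cumulative-minimum gate enforces the ordering, so a higher rank position can only stay active if all lower positions are at least as important. The number of surviving positions is the layer's effective rank.

```python
import numpy as np

class OrderedRankLoRA:
    """Toy sketch (not the paper's code) of a LoRA layer whose rank
    positions are gated by an ordered importance vector."""

    def __init__(self, d_in, d_out, max_rank=8, seed=0):
        rng = np.random.default_rng(seed)
        # Standard LoRA init: A random, B zero, so the update starts at 0.
        self.A = rng.standard_normal((max_rank, d_in)) * 0.01  # down-projection
        self.B = np.zeros((d_out, max_rank))                   # up-projection
        # One (learnable, here fixed for illustration) score per rank position.
        self.scores = np.linspace(1.0, 0.1, max_rank)

    def gates(self, threshold=0.5):
        # Cumulative minimum enforces the ordering: once a score drops
        # below the threshold, every later rank position is also off.
        ordered = np.minimum.accumulate(self.scores)
        return (ordered > threshold).astype(float)

    def effective_rank(self, threshold=0.5):
        return int(self.gates(threshold).sum())

    def forward(self, x, threshold=0.5):
        g = self.gates(threshold)
        # Gate each rank component of the low-rank update B (g * A) x.
        return ((x @ self.A.T) * g) @ self.B.T

lora = OrderedRankLoRA(d_in=16, d_out=16)
print(lora.effective_rank())          # → 4: only the first 4 positions survive
print(lora.forward(np.ones((2, 16))).shape)  # → (2, 16)
```

In an actual fine-tuning setup, the scores would be learned jointly with `A` and `B`, so each layer settles on exactly the rank its subject needs, which is what removes the combinatorial per-layer rank search described above.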