AutoLoRA: Automatic LoRA Retrieval and Fine-Grained Gated Fusion for Text-to-Image Generation

📅 2025-08-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address three key challenges in customizing large text-to-image models with LoRA—sparse metadata, poor zero-shot adaptability, and suboptimal multi-LoRA fusion—this paper proposes a semantic-driven framework for automatic retrieval and dynamic fusion. Methodologically, it introduces (1) a weight-encoded LoRA retriever that enables semantic alignment without access to the original training data, and (2) a fine-grained gated fusion mechanism that supports context-aware integration of multiple LoRAs across network layers and diffusion timesteps. The approach unifies low-rank adaptation, semantic space mapping, and dynamic diffusion model integration. Experiments demonstrate substantial improvements in generation quality and customization flexibility, enabling efficient, scalable, zero-shot multi-LoRA collaboration. The framework establishes a data-efficient, plug-and-play enhancement paradigm for the open-source LoRA ecosystem.
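The retrieval idea in the summary—embedding LoRA weight matrices and text prompts into a shared semantic space and ranking modules by similarity—can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `encode_lora` mean-pooling encoder and the flat-vector prompt embedding are hypothetical stand-ins for the learned weight encoder and text encoder the paper describes.

```python
import math

def cosine_sim(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def encode_lora(weight_rows):
    # Hypothetical stand-in for the paper's learned weight encoder:
    # mean-pool the rows of a LoRA update matrix into one embedding.
    dim = len(weight_rows[0])
    n = len(weight_rows)
    return [sum(row[i] for row in weight_rows) / n for i in range(dim)]

def retrieve(prompt_emb, lora_library, k=1):
    # Rank LoRA modules by similarity between the prompt embedding and
    # each module's weight-derived embedding; no training data needed.
    scored = sorted(
        lora_library.items(),
        key=lambda kv: cosine_sim(prompt_emb, encode_lora(kv[1])),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]
```

The key property this models is data-free retrieval: only the adapter's weights (not its training set or metadata) are needed to place it in the shared semantic space.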

📝 Abstract
Despite recent advances in photorealistic image generation through large-scale models like FLUX and Stable Diffusion v3, the practical deployment of these architectures remains constrained by their inherent intractability to parameter fine-tuning. While low-rank adaptation (LoRA) has demonstrated efficacy in enabling model customization with minimal parameter overhead, the effective utilization of distributed open-source LoRA modules faces three critical challenges: sparse metadata annotation, the requirement for zero-shot adaptation capabilities, and suboptimal strategies for multi-LoRA fusion. To address these limitations, we introduce a novel framework that enables semantic-driven LoRA retrieval and dynamic aggregation through two key components: (1) a weight encoding-based LoRA retriever that establishes a shared semantic space between LoRA parameter matrices and text prompts, eliminating dependence on the original training data, and (2) a fine-grained gated fusion mechanism that computes context-specific fusion weights across network layers and diffusion timesteps to optimally integrate multiple LoRA modules during generation. Our approach achieves significant improvements in image generation performance, thereby facilitating scalable and data-efficient enhancement of foundational models. This work establishes a critical bridge between the fragmented landscape of community-developed LoRAs and practical deployment requirements, enabling collaborative model evolution through standardized adapter integration.
Problem

Research questions and friction points this paper is trying to address.

Sparse metadata annotation in distributed LoRA modules
Need for zero-shot adaptation capabilities in LoRA
Suboptimal fusion strategies for multi-LoRA integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Weight encoding-based LoRA retriever for semantic-driven retrieval
Fine-grained gated fusion mechanism for dynamic aggregation
Context-specific fusion weights across layers and timesteps
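The gated-fusion innovations above can be sketched in miniature: for each (layer, timestep) pair, gate logits are normalized into mixing weights that combine the candidate LoRA weight updates into a single fused update. This is a hedged illustration under simplifying assumptions—the real gates would come from a small learned network conditioned on the prompt and timestep, whereas here the logits are passed in directly.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of gate logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def gated_fuse(deltas, gate_logits):
    # deltas: candidate LoRA weight updates (same-shape matrices) for one
    #   network layer at one diffusion timestep.
    # gate_logits: context-specific scores for that (layer, timestep) pair;
    #   in the paper these would be produced by a learned gating module.
    gates = softmax(gate_logits)
    rows, cols = len(deltas[0]), len(deltas[0][0])
    fused = [[sum(g * d[r][c] for g, d in zip(gates, deltas))
              for c in range(cols)] for r in range(rows)]
    return fused, gates
```

Because the gates are recomputed per layer and per timestep, one LoRA can dominate early denoising steps (global layout) while another takes over in later steps (fine texture), rather than using a single static blend ratio.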