Upcycling Text-to-Image Diffusion Models for Multi-Task Capabilities

📅 2025-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-to-image diffusion models face deployment challenges on edge devices when extended to multiple downstream tasks (e.g., editing, super-resolution, inpainting), as they typically require full retraining or introduce substantial parameter overhead. This work proposes an upcycling adaptation framework that enables multi-task capability without retraining the backbone and with minimal parameter cost. Specifically, it replaces selected feed-forward network (FFN) blocks with lightweight task-specific expert modules and employs dynamic routing to achieve task-aware feature modulation, while keeping the diffusion backbone frozen. The method is fully compatible with standard diffusion architectures and supports efficient on-device inference. Experiments demonstrate that the approach matches the performance of task-specific fine-tuned models across diverse image generation benchmarks, with comparable latency and GFLOPs. It significantly improves pre-trained model reusability and practical deployability on resource-constrained platforms.

📝 Abstract
Text-to-image synthesis has witnessed remarkable advancements in recent years. Many attempts have been made to adapt text-to-image models to support multiple tasks. However, existing approaches typically require resource-intensive re-training or additional parameters to accommodate the new tasks, which makes the model inefficient for on-device deployment. We propose Multi-Task Upcycling (MTU), a simple yet effective recipe that extends the capabilities of a pre-trained text-to-image diffusion model to support a variety of image-to-image generation tasks. MTU replaces Feed-Forward Network (FFN) layers in the diffusion model with smaller FFNs, referred to as experts, and combines them with a dynamic routing mechanism. To the best of our knowledge, MTU is the first multi-task diffusion modeling approach that seamlessly blends multi-tasking with on-device compatibility, by mitigating the issue of parameter inflation. We show that the performance of MTU is on par with single-task fine-tuned diffusion models across several tasks including image editing, super-resolution, and inpainting, while maintaining similar latency and computational load (GFLOPs) as the single-task fine-tuned models.
Problem

Research questions and friction points this paper is trying to address.

Extend text-to-image models for multi-task capabilities
Reduce resource-intensive re-training for new tasks
Enable on-device deployment with minimal parameter inflation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Replaces FFN layers with smaller expert networks
Uses dynamic routing for multi-task capabilities
Maintains on-device compatibility without parameter inflation
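The core idea above — swapping a large FFN layer for several smaller task experts combined by a router — can be illustrated with a minimal, framework-free sketch. This is an assumption-laden toy, not the paper's implementation: the class and function names are hypothetical, the experts are tiny random two-layer ReLU networks, and the "dynamic routing" is approximated here as a dense softmax over per-task logits (the paper's actual routing mechanism may be sparser or input-conditioned).

```python
import math
import random

random.seed(0)

def make_ffn(d_in, d_hidden):
    """Random weights for a tiny two-layer FFN (illustrative only)."""
    w1 = [[random.gauss(0, 1 / math.sqrt(d_in)) for _ in range(d_in)]
          for _ in range(d_hidden)]
    w2 = [[random.gauss(0, 1 / math.sqrt(d_hidden)) for _ in range(d_hidden)]
          for _ in range(d_in)]
    return w1, w2

def matvec(w, x):
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]

def ffn(params, x):
    w1, w2 = params
    h = [max(0.0, v) for v in matvec(w1, x)]  # ReLU hidden layer
    return matvec(w2, h)

class MultiTaskUpcycledFFN:
    """Replaces one large FFN with several smaller task experts plus a router.

    Hypothetical sketch of the MTU idea: each expert is much smaller than
    the FFN it replaces, so total parameters stay close to the original.
    """
    def __init__(self, d_model, d_expert, tasks):
        self.tasks = list(tasks)
        self.experts = {t: make_ffn(d_model, d_expert) for t in tasks}
        # Hypothetical router: learned per-task logits over the experts.
        self.router_logits = {t: [random.gauss(0, 1) for _ in tasks]
                              for t in tasks}

    def forward(self, x, task):
        # Softmax (numerically stabilized) over expert logits for this task.
        logits = self.router_logits[task]
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Dense weighted combination of expert outputs, for clarity.
        out = [0.0] * len(x)
        for w, t in zip(weights, self.tasks):
            y = ffn(self.experts[t], x)
            out = [o + w * yi for o, yi in zip(out, y)]
        return out

layer = MultiTaskUpcycledFFN(d_model=8, d_expert=4,
                             tasks=["edit", "sr", "inpaint"])
x = [random.gauss(0, 1) for _ in range(8)]
y = layer.forward(x, task="sr")
print(len(y))  # output dimension matches the input dimension
```

Because the experts share the frozen backbone and only the small expert/router weights differ per task, switching tasks at inference is just a change of routing key — no reloading of the backbone, which is what makes the recipe attractive for on-device use.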