🤖 AI Summary
To address the challenges of unifying diverse image-conditioned generation tasks—namely, modeling complexity and parameter explosion—this paper proposes a lightweight, single-stage unified diffusion framework. The method models the joint distribution of correlated image pairs (e.g., RGB-depth) and supports controllable generation, estimation, joint synthesis, signal-guided synthesis, and coarse-grained control—all within a single model, with only a 15% parameter increase, no architectural modifications, and no auxiliary networks. Key contributions include: (i) native support for non-spatially-aligned and coarse-grained conditioning inputs, breaking away from conventional multi-stage or multi-model paradigms; and (ii) a flexible sampling strategy enabling zero-overhead task switching. Experiments demonstrate that this single model matches or exceeds task-specific baselines, significantly outperforms existing unified approaches, and seamlessly integrates heterogeneous conditioning signals—advancing the practicality of controllable image generation.
📝 Abstract
Recent progress in image generation has sparked research into controlling generative models through condition signals, with various methods addressing specific challenges in conditional generation. Instead of proposing another specialized technique, we introduce a simple, unified framework to handle diverse conditional generation tasks involving a specific image-condition correlation. By learning a joint distribution over a correlated image pair (e.g. image and depth) with a diffusion model, our approach enables versatile capabilities via different inference-time sampling schemes, including controllable image generation (e.g. depth to image), estimation (e.g. image to depth), signal guidance, joint generation (image & depth), and coarse control. Previous attempts at unification often introduce significant complexity through multi-stage training, architectural modification, or increased parameter counts. In contrast, our simple formulation requires a single, computationally efficient training stage, maintains the standard model input, and adds minimal learned parameters (15% of the base model). Moreover, our model supports additional capabilities such as non-spatially-aligned and coarse conditioning. Extensive results show that our single model produces results comparable to those of specialized methods and better results than prior unified methods. We also demonstrate that multiple models can be effectively combined for multi-signal conditional generation.
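The task-switching idea described above can be illustrated with a toy sketch: a single joint denoiser over an (image, signal) pair, where the choice of per-branch noise schedule at inference time selects the task. A branch held at noise level zero acts as the condition; a branch following the full schedule is generated. This is a minimal, hypothetical sketch (the `denoiser` interface, the linear schedule, and the Euler-style update are illustrative assumptions, not the paper's actual formulation):

```python
import numpy as np

def unified_sample(denoiser, shape, task, cond=None, steps=50, rng=None):
    """Toy unified sampler: one loop covers all tasks by choosing
    per-branch noise schedules. A branch whose timestep stays at 0 is
    treated as a fixed condition; a branch following the full schedule
    is generated. `denoiser` is a hypothetical joint model returning
    noise estimates for both branches."""
    rng = rng or np.random.default_rng(0)
    x_img = rng.standard_normal(shape)                  # image branch
    x_sig = cond if cond is not None else rng.standard_normal(shape)

    full = np.linspace(1.0, 0.0, steps)                 # toy linear schedule
    zero = np.zeros(steps)
    if task == "signal_to_image":      # e.g. depth -> image
        t_img, t_sig = full, zero
    elif task == "image_to_signal":    # estimation, e.g. image -> depth
        t_img, t_sig = zero, full
    elif task == "joint":              # generate both together
        t_img, t_sig = full, full
    else:
        raise ValueError(f"unknown task: {task}")

    for i in range(steps):
        eps_img, eps_sig = denoiser(x_img, x_sig, t_img[i], t_sig[i])
        if t_img[i] > 0:               # only update branches being denoised
            x_img = x_img - (1.0 / steps) * eps_img     # toy Euler step
        if t_sig[i] > 0:
            x_sig = x_sig - (1.0 / steps) * eps_sig
    return x_img, x_sig
```

Because the loop and model are identical across tasks, switching between controllable generation, estimation, and joint generation costs nothing at inference time, matching the "zero-overhead task switching" claim.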