One Diffusion to Generate Them All

📅 2024-11-25
🏛️ arXiv.org
📈 Citations: 8
Influential: 1
🤖 AI Summary
This work addresses the lack of a unified modeling framework for multi-task image synthesis and understanding. The authors propose OneDiffusion, a diffusion-based universal architecture that supports this full range of tasks. Its core idea is to reformulate diverse forward tasks (e.g., generation conditioned on text, depth, pose, or layout; deblurring; super-resolution) and inverse tasks (e.g., depth estimation, segmentation, camera pose prediction) as a single frame-sequence generation problem in which each frame carries its own noise scale. Shared conditional embeddings, serialized frame representations, and a unified noise-scheduling scheme enable full-task coverage within one model, without task-specific modules or resolution constraints. The framework supports zero-shot transfer and instant personalization, and delivers strong performance on text-to-image synthesis, multi-view generation, identity-preserving synthesis, depth estimation, and camera pose prediction, demonstrating cross-task generalization despite training on only medium-scale datasets.

📝 Abstract
We introduce OneDiffusion, a versatile, large-scale diffusion model that seamlessly supports bidirectional image synthesis and understanding across diverse tasks. It enables conditional generation from inputs such as text, depth, pose, layout, and semantic maps, while also handling tasks like image deblurring, upscaling, and reverse processes such as depth estimation and segmentation. Additionally, OneDiffusion allows for multi-view generation, camera pose estimation, and instant personalization using sequential image inputs. Our model takes a straightforward yet effective approach by treating all tasks as frame sequences with varying noise scales during training, allowing any frame to act as a conditioning image at inference time. Our unified training framework removes the need for specialized architectures, supports scalable multi-task training, and adapts smoothly to any resolution, enhancing both generalization and scalability. Experimental results demonstrate competitive performance across tasks in both generation and prediction, such as text-to-image, multi-view generation, ID preservation, depth estimation, and camera pose estimation, despite a relatively small training dataset. Our code and checkpoint are freely available at https://github.com/lehduong/OneDiffusion.
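The core mechanism described above, treating every task as a frame sequence in which each frame carries its own noise scale, can be sketched as a toy NumPy example (the linear alpha-bar schedule, frame shapes, and function names here are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def add_noise(frames, timesteps, num_steps=1000):
    """Noise each frame independently according to its own timestep.

    frames:    (N, H, W, C) array, a "sequence" of views/conditions/targets.
    timesteps: (N,) per-frame noise levels in [0, num_steps); t = 0 keeps
               a frame clean, so it acts as a conditioning image.
    """
    # Simple linear alpha-bar schedule (illustrative, not the paper's).
    alpha_bar = 1.0 - timesteps / num_steps            # (N,)
    a = alpha_bar.reshape(-1, 1, 1, 1)
    noise = np.random.randn(*frames.shape)
    noisy = np.sqrt(a) * frames + np.sqrt(1.0 - a) * noise
    return noisy, noise

rng = np.random.default_rng(0)
frames = rng.standard_normal((3, 8, 8, 3))             # e.g. [image, depth, pose]

# Training: every frame samples an independent timestep.
t_train = rng.integers(0, 1000, size=3).astype(float)
noisy_train, _ = add_noise(frames, t_train)

# Inference (e.g. depth estimation): keep the image and pose clean (t = 0),
# fully noise the depth frame, then denoise only that frame.
t_infer = np.array([0.0, 999.0, 0.0])
noisy_infer, _ = add_noise(frames, t_infer)
assert np.allclose(noisy_infer[0], frames[0])          # clean frame = condition
```

Because a timestep of zero leaves a frame untouched, the same model switches between generation and prediction tasks purely by choosing which frames are noised, with no task-specific heads.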
Problem

Research questions and friction points this paper is trying to address.

No single diffusion framework handles both image synthesis and image understanding bidirectionally
Existing approaches rely on task-specific architectures to cover diverse inputs and outputs
Multi-task training is difficult to scale and to adapt across varying resolutions
Innovation

Methods, ideas, or system contributions that make the work stand out.

A single diffusion model covering diverse image generation and understanding tasks
Multi-task training that casts every task as a frame sequence with per-frame noise scales
Any frame can serve as conditioning at inference, supporting both generation and inverse prediction