Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction

📅 2024-09-26
🏛️ arXiv.org
📈 Citations: 12
Influential: 1
🤖 AI Summary
To address the longstanding trade-off between accuracy and inference efficiency in dense prediction tasks (e.g., depth and surface normal estimation) on complex images, this paper introduces a diffusion-based foundation model tailored for high-fidelity dense prediction. Departing from the conventional multi-step noise-prediction paradigm, the approach directly regresses dense ground-truth annotations (e.g., pixel-wise depth maps) via a supervised single-step denoising formulation, augmented by a novel "detail preserver" fine-tuning strategy that retains structural and textural detail while improving generalization. Experiments demonstrate state-of-the-art zero-shot performance on depth and surface normal estimation, with inference several times faster than existing diffusion-based methods. The model also natively supports both single- and multi-view joint estimation and extends seamlessly to downstream 3D reconstruction tasks.

📝 Abstract
Leveraging the visual priors of pre-trained text-to-image diffusion models offers a promising solution to enhance zero-shot generalization in dense prediction tasks. However, existing methods often uncritically use the original diffusion formulation, which may not be optimal due to the fundamental differences between dense prediction and image generation. In this paper, we provide a systematic analysis of the diffusion formulation for dense prediction, focusing on both quality and efficiency. We find that the original parameterization type for image generation, which learns to predict noise, is harmful for dense prediction; the multi-step noising/denoising diffusion process is also unnecessary and challenging to optimize. Based on these insights, we introduce Lotus, a diffusion-based visual foundation model with a simple yet effective adaptation protocol for dense prediction. Specifically, Lotus is trained to directly predict annotations instead of noise, thereby avoiding harmful variance. We also reformulate the diffusion process into a single-step procedure, simplifying optimization and significantly boosting inference speed. Additionally, we introduce a novel tuning strategy called detail preserver, which achieves more accurate and fine-grained predictions. Without scaling up the training data or model capacity, Lotus achieves SoTA performance in zero-shot depth and normal estimation across various datasets. It also enhances efficiency, being significantly faster than most existing diffusion-based methods. Lotus' superior quality and efficiency also enable a wide range of practical applications, such as joint estimation and single/multi-view 3D reconstruction. Project page: https://lotus3d.github.io/.
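The abstract's key reformulation, predicting the annotation x0 directly instead of the noise, and collapsing diffusion to a single step, can be illustrated with a toy NumPy sketch of the standard forward-diffusion algebra. This is not the authors' implementation; the function names and the fixed noise level are illustrative assumptions:

```python
import numpy as np

# Forward diffusion (standard DDPM algebra): z_t = sqrt(a_t)*x0 + sqrt(1-a_t)*eps.
# - Noise prediction: the network's target is eps, whose sampled variance leaks
#   into the output (the "harmful variance" the abstract refers to).
# - Annotation (x0) prediction: the network's target is the annotation itself;
#   at a single fixed timestep this reduces to plain supervised regression.

rng = np.random.default_rng(0)
a_t = 0.5  # fixed noise level for the single-step formulation (illustrative)

def forward_diffuse(x0, eps, a_t):
    """Noise the annotation x0 with Gaussian noise eps at level a_t."""
    return np.sqrt(a_t) * x0 + np.sqrt(1.0 - a_t) * eps

def x0_from_eps(z_t, eps_hat, a_t):
    """Invert the forward process given a noise estimate eps_hat."""
    return (z_t - np.sqrt(1.0 - a_t) * eps_hat) / np.sqrt(a_t)

x0 = rng.normal(size=(4, 4))   # stand-in for a pixel-wise annotation (e.g. depth)
eps = rng.normal(size=(4, 4))  # sampled noise
z_t = forward_diffuse(x0, eps, a_t)

# With a perfect noise estimate, the two parameterizations agree exactly:
# recovering x0 from z_t in one step is equivalent to regressing x0 directly.
recovered = x0_from_eps(z_t, eps, a_t)
```

The point of the sketch is that once the timestep is fixed and the target is x0, the diffusion machinery adds nothing beyond a regression loss, which is why the single-step formulation is both simpler to optimize and much faster at inference.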
Problem

Research questions and friction points this paper is trying to address.

Image Prediction
Accuracy Enhancement
Efficient Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lotus Visual Model
Diffusion Process
High-quality Prediction