DiffusionBrowser: Interactive Diffusion Previews via Multi-Branch Decoders

📅 2025-12-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Video diffusion models suffer from opaque generation processes, low inference efficiency, and limited interactivity. To address these issues, we propose a lightweight, model-agnostic multi-branch decoding framework that enables real-time preview of RGB frames and scene intrinsics—including depth, surface normals, and motion fields—at arbitrary denoising steps or Transformer layers. Our approach introduces the first interactive mid-generation visualization and guidance mechanism, supporting noise re-injection and cross-modal conditional steering. Furthermore, we provide the first systematic characterization of the progressive assembly of scene structure throughout the diffusion process. Experiments demonstrate that our framework achieves preview speeds exceeding 4× real-time (e.g., <1 second for a 4-second video) while preserving visual fidelity and motion coherence. Crucially, users can intervene at intermediate layers to significantly enhance generation quality and controllability.
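The multi-branch decoding idea above can be sketched minimally: one tiny head per modality reads an intermediate diffusion latent and emits a preview (RGB, depth, normals, motion). Everything here is an illustrative assumption — the channel counts, the linear 1×1-convolution-style heads, and the names are not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_CH = 16  # assumed width of the intermediate latent
BRANCHES = {"rgb": 3, "depth": 1, "normals": 3, "flow": 2}

# One small weight matrix per branch; applied over the channel axis,
# this behaves like a 1x1 convolution (a deliberately lightweight head).
heads = {name: rng.standard_normal((LATENT_CH, ch)) * 0.02
         for name, ch in BRANCHES.items()}

def decode_previews(latent):
    """Decode every preview modality from one intermediate latent.

    latent: (frames, height, width, LATENT_CH) activation grabbed at an
    arbitrary denoising step or Transformer layer.
    """
    # Matmul broadcasts over the leading (frames, H, W) dims, so each
    # preview keeps the latent's spatial-temporal layout.
    return {name: latent @ w for name, w in heads.items()}

latent = rng.standard_normal((4, 8, 8, LATENT_CH))  # toy 4-frame latent
previews = decode_previews(latent)
for name, p in previews.items():
    print(name, p.shape)
```

Because the heads are this small, all branches can run in parallel at every step, which is consistent with the faster-than-real-time preview claim.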

📝 Abstract
Video diffusion models have revolutionized generative video synthesis, but they are imprecise, slow, and can be opaque during generation -- keeping users in the dark for a prolonged period. In this work, we propose DiffusionBrowser, a model-agnostic, lightweight decoder framework that allows users to interactively generate previews at any point (timestep or transformer block) during the denoising process. Our model can generate multi-modal preview representations that include RGB and scene intrinsics at more than 4× real-time speed (less than 1 second for a 4-second video) that convey consistent appearance and motion to the final video. With the trained decoder, we show that it is possible to interactively guide the generation at intermediate noise steps via stochasticity reinjection and modal steering, unlocking a new control capability. Moreover, we systematically probe the model using the learned decoders, revealing how scene, object, and other details are composed and assembled during the otherwise black-box denoising process.
Problem

Research questions and friction points this paper is trying to address.

Enables interactive previews during video diffusion denoising
Generates multi-modal previews faster than real-time
Provides control and insight into black-box generation processes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interactive previews via multi-branch decoders
Real-time multi-modal previews with RGB and intrinsics
Control via stochasticity reinjection and modal steering
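The stochasticity-reinjection control listed above amounts to pushing a partially denoised latent back to a noisier timestep via the standard DDPM forward process q(x_t | x_0), then resuming sampling from there. The linear beta schedule and helper below are illustrative assumptions, not DiffusionBrowser's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000
betas = np.linspace(1e-4, 0.02, T)        # assumed linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)       # cumulative signal retention

def reinject_noise(latent, target_t):
    """Re-noise a latent to diffusion step target_t.

    Implements x_t = sqrt(alpha_bar_t) * x + sqrt(1 - alpha_bar_t) * eps,
    the closed-form DDPM forward process, with fresh Gaussian noise eps.
    """
    eps = rng.standard_normal(latent.shape)
    return (np.sqrt(alpha_bar[target_t]) * latent
            + np.sqrt(1.0 - alpha_bar[target_t]) * eps)

x = rng.standard_normal((4, 8, 8, 16))     # partially denoised video latent
x_noisy = reinject_noise(x, target_t=500)  # jump back to step 500
print(x_noisy.shape)
```

The fresh noise sample is what makes the resumed trajectory diverge from the original one: a user who dislikes a preview can re-roll from an intermediate step instead of restarting the whole generation.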