TITAN-Guide: Taming Inference-Time AligNment for Guided Text-to-Video Diffusion Models

📅 2025-07-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of poor controllability, high memory overhead, and insufficient precision of existing training-free guidance methods in text-to-video (T2V) diffusion model inference, this paper proposes TITAN-Guide: a backpropagation-free, forward-gradient-descent guidance framework. The core innovation is performing efficient, memory-light forward optimization directly in the diffusion latent space, leveraging a discriminative guidance model and a directional-directive optimization strategy. By bypassing backpropagation, TITAN-Guide avoids both GPU memory bottlenecks and coarse-grained gradient estimation, enabling fine-grained alignment control during inference. Experiments demonstrate that TITAN-Guide substantially reduces GPU memory consumption while outperforming both state-of-the-art training-free and fine-tuning-based guidance methods across multiple T2V guidance benchmarks, achieving superior trade-offs between inference efficiency and generation quality.
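The backprop-free idea summarized above can be sketched on a toy loss: sample a random direction, take the directional derivative of the guidance loss along it, and scale the direction by that scalar to obtain an unbiased gradient estimate, with no backward pass. The NumPy sketch below is an illustration under stated assumptions, not the paper's code: `forward_gradient_step`, the toy `loss`, and all constants are hypothetical, and a finite-difference quotient stands in for exact forward-mode AD (JVP).

```python
import numpy as np

def forward_gradient_step(loss_fn, z, lr=0.05, eps=1e-4, rng=None):
    """One backprop-free update on latent z (hypothetical helper).
    Estimate the gradient as (dL/dv) * v for a random direction v;
    this is unbiased when E[v v^T] = I. The directional derivative
    dL/dv is approximated here by finite differences as a stand-in
    for exact forward-mode AD."""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.standard_normal(z.shape)               # random tangent direction
    dd = (loss_fn(z + eps * v) - loss_fn(z)) / eps  # ~ grad L(z) . v
    return z - lr * dd * v                          # move along scaled direction

# Toy discriminative guidance loss: pull the latent toward a target.
target = np.array([1.0, -2.0, 0.5])
loss = lambda z: float(np.sum((z - target) ** 2))

z = np.zeros(3)
rng = np.random.default_rng(0)
for _ in range(500):
    z = forward_gradient_step(loss, z, rng=rng)
```

Only forward evaluations of the loss are needed, so no activations must be stored for a backward pass, which is where the memory savings on large T2V models would come from.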

📝 Abstract
Conditional diffusion models still require heavy supervised fine-tuning to gain control over a category of tasks. Training-free conditioning via guidance with off-the-shelf models is a favorable alternative that avoids further fine-tuning of the base model. However, existing training-free guidance frameworks either have heavy memory requirements or offer sub-optimal control due to rough estimation. These shortcomings limit their applicability to diffusion models that require intense computation, such as Text-to-Video (T2V) diffusion models. In this work, we propose Taming Inference-Time AligNment for Guided Text-to-Video Diffusion Models, so-called TITAN-Guide, which overcomes the memory issues and provides more optimal control in the guidance process than its counterparts. In particular, we develop an efficient method for optimizing diffusion latents without backpropagation through a discriminative guiding model, studying forward gradient descent for guided diffusion tasks with various options for the directional directives. In our experiments, we demonstrate the effectiveness of our approach in efficiently managing memory during latent optimization, where previous methods fall short. Our proposed approach not only minimizes memory requirements but also significantly enhances T2V performance across a range of diffusion guidance benchmarks. Code, models, and demo are available at https://titanguide.github.io.
Problem

Research questions and friction points this paper is trying to address.

Reducing memory usage in guided text-to-video diffusion models
Improving control accuracy without supervised fine-tuning
Optimizing diffusion latents efficiently without backpropagation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free guidance with off-the-shelf models
Forward gradient descent for latent optimization
Efficient memory management in diffusion guidance
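The abstract mentions studying "various options on directional directives", i.e., how the tangent direction for the forward gradient is sampled. The paper does not spell those options out here, so the sketch below compares two common, illustrative choices (Gaussian and Rademacher tangents, both satisfying E[v vᵀ] = I) on a toy loss; all names and constants are hypothetical, and finite differences again stand in for exact forward-mode AD.

```python
import numpy as np

def gaussian_tangent(shape, rng):
    """Standard normal direction: E[v v^T] = I."""
    return rng.standard_normal(shape)

def rademacher_tangent(shape, rng):
    """Entries drawn uniformly from {-1, +1}: also E[v v^T] = I."""
    return rng.integers(0, 2, shape) * 2.0 - 1.0

def fg_optimize(loss_fn, z0, tangent, steps=500, lr=0.05, eps=1e-4, seed=0):
    """Run forward-gradient descent with a pluggable tangent sampler."""
    rng = np.random.default_rng(seed)
    z = z0.copy()
    for _ in range(steps):
        v = tangent(z.shape, rng)
        dd = (loss_fn(z + eps * v) - loss_fn(z)) / eps  # ~ grad L(z) . v
        z -= lr * dd * v
    return z

# Toy guidance loss: both tangent choices should drive z toward the target.
target = np.array([0.5, -1.0, 2.0])
loss = lambda z: float(np.sum((z - target) ** 2))
z_gauss = fg_optimize(loss, np.zeros(3), gaussian_tangent)
z_rad = fg_optimize(loss, np.zeros(3), rademacher_tangent)
```

Both samplers yield unbiased gradient estimates; they differ in the variance of the estimate (Rademacher directions have fixed norm), which is one axis along which directional-directive choices can plausibly be compared.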