🤖 AI Summary
Video diffusion models suffer from geometric inconsistency and poor controllability in 3D/4D generation, while fine-tuning or retraining compromises pretrained knowledge and incurs high computational cost. This paper introduces WorldForge, a training-free, inference-time framework for steering pretrained video diffusion models, built from three core components: intra-step recursive refinement, flow-gated latent fusion, and dual-path self-corrective guidance. Leveraging latent-space optical flow analysis, motion-appearance disentangled injection, and contrastive comparison of guided and unguided denoising paths, the method injects trajectory priors precisely and dynamically. Crucially, it preserves the integrity of pretrained knowledge while substantially improving trajectory consistency, visual fidelity, and photorealism. Extensive evaluations across multiple benchmarks demonstrate superior performance and plug-and-play applicability.
📝 Abstract
Recent video diffusion models demonstrate strong potential in spatial intelligence tasks due to their rich latent world priors. However, this potential is hindered by their limited controllability and geometric inconsistency, creating a gap between their strong priors and their practical use in 3D/4D tasks. As a result, current approaches often rely on retraining or fine-tuning, which risks degrading pretrained knowledge and incurs high computational costs. To address this, we propose WorldForge, a training-free, inference-time framework composed of three tightly coupled modules. Intra-Step Recursive Refinement repeatedly optimizes network predictions within each denoising step at inference time, enabling precise trajectory injection. Flow-Gated Latent Fusion leverages optical flow similarity to decouple motion from appearance in the latent space and selectively inject trajectory guidance into motion-related channels. Dual-Path Self-Corrective Guidance compares guided and unguided denoising paths to adaptively correct trajectory drift caused by noisy or misaligned structural signals. Together, these components inject fine-grained, trajectory-aligned guidance without training, achieving both accurate motion control and photorealistic content generation. Extensive experiments across diverse benchmarks validate our method's superiority in realism, trajectory consistency, and visual fidelity. This work introduces a novel plug-and-play paradigm for controllable video synthesis, offering a new perspective on leveraging generative priors for spatial intelligence.
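To make the three modules more concrete, the sketch below shows how such training-free guidance could be layered into a single denoising step. It is a minimal illustration under our own assumptions, not the paper's implementation: the function `guided_denoising_step`, the per-channel `flow_gate`, the trajectory-aligned latent `traj_latent`, and the hyperparameters `K`, `alpha`, and `w` are hypothetical stand-ins for the mechanisms the abstract describes.

```python
# Minimal sketch (not the authors' code) of an inference-time guidance step
# combining the three assumed mechanisms described in the abstract.
import torch


def guided_denoising_step(denoiser, z_t, t, traj_latent, flow_gate,
                          K=3, alpha=0.5, w=1.0):
    # Dual-path baseline: an unguided prediction on the untouched latent.
    eps_unguided = denoiser(z_t, t)

    z_guided = z_t
    eps_guided = eps_unguided
    for _ in range(K):
        # Flow-Gated Latent Fusion (assumed form): blend the trajectory-aligned
        # latent only into channels the gate flags as motion-related.
        mask = flow_gate.bool()  # (C, 1, 1) broadcasts over (B, C, H, W)
        z_guided = torch.where(mask,
                               (1 - alpha) * z_guided + alpha * traj_latent,
                               z_guided)
        # Intra-Step Recursive Refinement (assumed form): re-predict on the
        # refined latent within the same timestep instead of advancing to t-1.
        eps_guided = denoiser(z_guided, t)

    # Dual-Path Self-Corrective Guidance (assumed, CFG-style contrast):
    # amplify the guided prediction relative to the unguided path to
    # counteract drift from noisy or misaligned structural signals.
    return eps_guided + w * (eps_guided - eps_unguided)


# Toy usage with a stand-in denoiser; a real setup would plug in a pretrained
# video diffusion backbone and a trajectory latent derived from optical flow.
if __name__ == "__main__":
    B, C, H, W = 1, 4, 8, 8
    denoiser = lambda z, t: torch.randn_like(z)      # placeholder network
    z_t = torch.randn(B, C, H, W)
    traj_latent = torch.randn(B, C, H, W)            # assumed trajectory-aligned latent
    flow_gate = (torch.rand(C, 1, 1) > 0.5).float()  # assumed per-channel motion gate
    eps = guided_denoising_step(denoiser, z_t, 0.5, traj_latent, flow_gate)
    print(eps.shape)                                 # torch.Size([1, 4, 8, 8])
```

In an actual pipeline the placeholder denoiser would be the pretrained video diffusion model, and the channel gate would be computed from latent-space optical flow similarity rather than sampled at random.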