🤖 AI Summary
This work addresses the limitations of existing bidirectional video generation methods, which suffer from weak interactivity and high latency in multi-shot, long-form narrative synthesis. The authors propose a causal multi-shot video generation architecture that formulates the task as autoregressively generating the next shot conditioned on historical context, letting users dynamically steer the narrative through streaming prompts. Key innovations include a dual-cache memory mechanism that ensures visual consistency both within and across shots, pairing a global context cache (inter-shot) with a local context cache (intra-shot) and distinguishing the two via a RoPE-based discontinuity indicator, and a two-stage self-forcing distillation strategy, applied after distribution-matching distillation, that mitigates the error accumulation inherent in autoregressive generation. The method achieves sub-second latency at 16 FPS on a single GPU while producing video quality comparable to or better than that of conventional, slower bidirectional models.
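To make the autoregressive formulation concrete, here is a minimal Python sketch of the interactive next-shot loop under stated assumptions: each streaming prompt triggers one shot conditioned on a bounded window of prior shots. `generate_shot`, `stream_story`, and the history window are illustrative inventions, not the paper's actual API.

```python
from collections import deque
from typing import Iterator

def generate_shot(prompt: str, history: list[str]) -> str:
    """Stand-in for the causal student model: generates one shot
    conditioned on the streaming prompt and retained past shots."""
    return f"<shot for {prompt!r} given {len(history)} past shots>"

def stream_story(prompts: Iterator[str], max_history: int = 4) -> list[str]:
    """Autoregressive multi-shot loop: each incoming prompt steers the
    next shot, conditioned on a bounded window of previous shots."""
    history: deque[str] = deque(maxlen=max_history)  # bounded context window
    shots: list[str] = []
    for prompt in prompts:       # prompts may arrive while video streams out
        shot = generate_shot(prompt, list(history))
        history.append(shot)     # self-generated history feeds the next shot
        shots.append(shot)
    return shots

print(stream_story(iter(["a knight rides out at dawn", "she reaches a ruined castle"])))
```

Because each shot depends only on past context, frames can begin streaming before later prompts even exist, which is what makes the sub-second interactive latency possible.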
📝 Abstract
Multi-shot video generation is crucial for long-form narrative storytelling, yet current bidirectional architectures suffer from limited interactivity and high latency. We propose ShotStream, a novel causal multi-shot architecture that enables interactive storytelling and efficient on-the-fly frame generation. By reformulating the task as next-shot generation conditioned on historical context, ShotStream allows users to steer the ongoing narrative via streaming prompts. We achieve this by first fine-tuning a text-to-video model into a bidirectional next-shot generator, which is then distilled into a causal student via Distribution Matching Distillation. To overcome the challenges of inter-shot consistency and error accumulation inherent in autoregressive generation, we introduce two key innovations. First, a dual-cache memory mechanism preserves visual coherence: a global context cache retains conditional frames for inter-shot consistency, while a local context cache holds the frames generated within the current shot for intra-shot consistency; a RoPE-based discontinuity indicator explicitly distinguishes the two caches, eliminating positional ambiguity between them. Second, to mitigate error accumulation, we propose a two-stage distillation strategy: it begins with intra-shot self-forcing conditioned on ground-truth historical shots and progressively extends to inter-shot self-forcing using self-generated histories, effectively bridging the train-test gap. Extensive experiments demonstrate that ShotStream generates coherent multi-shot videos with sub-second latency, achieving 16 FPS on a single GPU. It matches or exceeds the quality of slower bidirectional models, paving the way for real-time interactive storytelling. Training and inference code, as well as the models, are available on our project page.
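The dual-cache mechanism can be pictured as two key/value buffers whose RoPE positions are deliberately separated. Below is a minimal PyTorch sketch under explicit assumptions: keys and values are cached per frame, and the discontinuity indicator is modeled as a large positional gap between the global and local caches; `DualCache`, `gap`, and `start_new_shot` are hypothetical names, not the released implementation.

```python
import torch

class DualCache:
    """Hypothetical sketch: two KV buffers plus a RoPE position gap
    serving as the discontinuity indicator between them."""

    def __init__(self, gap: int = 10_000):
        self.global_kv: list[tuple[torch.Tensor, torch.Tensor]] = []  # inter-shot memory
        self.local_kv: list[tuple[torch.Tensor, torch.Tensor]] = []   # current-shot memory
        self.gap = gap  # large position jump marking the cache boundary

    def append(self, k: torch.Tensor, v: torch.Tensor) -> None:
        """Cache the key/value of a newly generated frame."""
        self.local_kv.append((k, v))

    def start_new_shot(self, keep: int = 2) -> None:
        """Promote a few frames of the finished shot into the global
        cache as conditional frames, then reset the local cache."""
        self.global_kv.extend(self.local_kv[-keep:])
        self.local_kv.clear()

    def rope_positions(self) -> torch.Tensor:
        """Positions fed to RoPE: local entries sit `gap` indices above
        the global ones, so attention can tell the caches apart."""
        n_g, n_l = len(self.global_kv), len(self.local_kv)
        return torch.cat([torch.arange(n_g), n_g + self.gap + torch.arange(n_l)])
```

The point of the positional gap: without it, frames promoted from earlier shots would be indistinguishable from immediately preceding frames, blurring shot boundaries.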
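The two training stages differ only in which frames are fed forward as history during rollouts. This schematic sketch, with stub functions standing in for the student model and the distillation objective, isolates that single switch; `rollout_shot`, `distill_loss`, and the sample layout are assumptions for illustration.

```python
def rollout_shot(history: list[str], prompt: str) -> str:
    """Stub for the causal student's autoregressive shot rollout."""
    return f"<generated shot: {prompt}>"

def distill_loss(generated: str, gt_shot: str) -> float:
    """Stub for the distribution-matching distillation objective."""
    return 0.0

def train_step(sample: dict, stage: int) -> float:
    """Stage 1 (intra-shot): roll out each shot against ground-truth
    history. Stage 2 (inter-shot): feed self-generated shots forward,
    exposing the student to its own errors as at test time."""
    history: list[str] = []
    loss = 0.0
    for prompt, gt_shot in zip(sample["prompts"], sample["shots"]):
        pred = rollout_shot(history, prompt)
        loss += distill_loss(pred, gt_shot)
        # The only switch between stages: whose frames become history.
        history.append(gt_shot if stage == 1 else pred)
    return loss

sample = {"prompts": ["p1", "p2"], "shots": ["s1", "s2"]}
print(train_step(sample, stage=1), train_step(sample, stage=2))
```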