🤖 AI Summary
This work addresses the high computational cost of existing trajectory-controllable video generation methods, which rely on multi-step denoising, and the significant degradation in video quality and trajectory accuracy caused by direct distillation. To overcome these limitations, the authors propose a novel training framework that first trains a trajectory adapter on a multi-step generator and then distills the generator into a few-step variant, followed by a hybrid fine-tuning strategy combining diffusion and adversarial objectives to optimize the adapter. This approach establishes a new paradigm that integrates distillation with hybrid fine-tuning, achieving, for the first time under few-step conditions, both high visual fidelity and precise trajectory control. Evaluated on the newly introduced FlashBench benchmark, the method outperforms both existing distillation approaches and multi-step models in terms of visual quality and trajectory consistency.
📝 Abstract
Recent advances in trajectory-controllable video generation have achieved remarkable progress. Previous methods mainly use adapter-based architectures for precise motion control along predefined trajectories. However, all of these methods rely on a multi-step denoising process, leading to substantial time redundancy and computational overhead. While existing video distillation methods successfully distill multi-step generators into few-step ones, directly applying these approaches to trajectory-controllable video generation results in noticeable degradation in both video quality and trajectory accuracy. To bridge this gap, we introduce FlashMotion, a novel training framework designed for few-step trajectory-controllable video generation. We first train a trajectory adapter on a multi-step video generator for precise trajectory control. Then, we distill the generator into a few-step version to accelerate video generation. Finally, we fine-tune the adapter using a hybrid strategy that combines diffusion and adversarial objectives, aligning it with the few-step generator to produce high-quality, trajectory-accurate videos. For evaluation, we introduce FlashBench, a benchmark for long-sequence trajectory-controllable video generation that measures both video quality and trajectory accuracy across varying numbers of foreground objects. Experiments on two adapter architectures show that FlashMotion surpasses existing video distillation methods and previous multi-step models in both visual quality and trajectory consistency.
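To make the hybrid fine-tuning strategy concrete, here is a minimal PyTorch sketch of an adapter objective that sums a diffusion (denoising) loss and an adversarial loss. The tiny network shapes, the one-level noising, the non-saturating GAN term, and the weight `lambda_adv` are all illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class TinyAdapter(nn.Module):
    """Stand-in for the trajectory adapter: predicts noise from a noisy latent."""
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, dim))

    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Stand-in discriminator scoring denoised latents (real vs. generated)."""
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x)

def hybrid_loss(adapter, disc, x0, noise, lambda_adv=0.1):
    """Diffusion MSE on predicted noise + non-saturating adversarial term."""
    x_noisy = x0 + noise                  # toy one-level forward (noising) process
    pred_noise = adapter(x_noisy)
    l_diff = nn.functional.mse_loss(pred_noise, noise)
    x_denoised = x_noisy - pred_noise     # reconstruction fed to the discriminator
    logits = disc(x_denoised)
    l_adv = nn.functional.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))  # push the discriminator toward "real"
    return l_diff + lambda_adv * l_adv

torch.manual_seed(0)
adapter, disc = TinyAdapter(), TinyDiscriminator()
x0 = torch.randn(4, 8)
noise = torch.randn(4, 8)
loss = hybrid_loss(adapter, disc, x0, noise)
loss.backward()  # a real setup would alternate adapter and discriminator updates
```

In practice the adversarial term would score whole generated video clips against real ones, and the diffusion term would use the few-step generator's noise schedule; this sketch only shows how the two objectives combine into a single scalar loss.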