AI Summary
Text-driven long-video editing is severely constrained by GPU memory overhead, making it infeasible to process videos exceeding hundreds of frames. To address this, we propose a training-free two-stage framework. In the first stage, we introduce an adaptive attention pruning mechanism that dynamically compresses key-value (KV) sequences to expand the capacity of keyframes. In the second stage, we design a data-driven keyframe selection strategy that jointly optimizes semantic representativeness and temporal coherence, integrating token-level importance scoring, multi-scale inter-frame similarity modeling, and interpolation-based refinement. We further construct LongV-EVAL, the first high-quality benchmark for long-video editing. Evaluated on A800 GPUs, our method enables single-pass inference for videos exceeding 1,000 frames (minute-scale), handling videos about 10x longer than TokenFlow while delivering significantly improved generation quality.
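The adaptive attention pruning idea above can be illustrated with a minimal sketch. This is not the authors' implementation: the importance proxy (mean attention weight received by each KV token) and the `keep_ratio` parameter are illustrative assumptions, shown here only to make the KV-slimming concept concrete.

```python
import numpy as np

def slim_attention(q, k, v, keep_ratio=0.25):
    """Attention over a slimmed KV sequence (illustrative sketch).

    Tokens are ranked by a simple importance proxy (mean attention weight
    they receive) and only the top `keep_ratio` fraction is kept, shrinking
    the KV memory footprint before the final attention pass.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                   # (Nq, Nk) logits
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)        # softmax over keys
    importance = attn.mean(axis=0)                  # (Nk,) per-token score
    n_keep = max(1, int(len(importance) * keep_ratio))
    idx = np.argsort(importance)[-n_keep:]          # top tokens by importance
    k_s, v_s = k[idx], v[idx]                       # slimmed KV sequence
    scores_s = q @ k_s.T / np.sqrt(d)
    attn_s = np.exp(scores_s - scores_s.max(axis=-1, keepdims=True))
    attn_s /= attn_s.sum(axis=-1, keepdims=True)
    return attn_s @ v_s                             # (Nq, d) output
```

With `keep_ratio=0.25`, the attention pass attends to a quarter of the original KV tokens, which is what allows the number of jointly edited keyframes to grow within the same memory budget.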
Abstract
Despite great progress, text-driven long video editing remains notoriously challenging, mainly due to excessive memory overhead. Although recent efforts have simplified this task into a two-step process of keyframe translation and interpolation generation, token-wise keyframe translation still caps the achievable video length. In this paper, we propose a novel and training-free approach to efficient and effective long video editing, termed AdaFlow. We first reveal that not all tokens of video frames hold equal importance for keyframe translation. Based on this observation, we propose an Adaptive Attention Slimming scheme for AdaFlow that squeezes the $KV$ sequence, increasing the number of keyframes for translation by an order of magnitude. In addition, an Adaptive Keyframe Selection scheme is equipped to select representative frames for joint editing, further improving generation quality. With these innovative designs, AdaFlow achieves high-quality minute-long video editing in one inference, i.e., more than 1k frames on a single A800 GPU, which is about ten times longer than compared methods, e.g., TokenFlow. To validate AdaFlow, we also build a new benchmark for long video editing with high-quality annotations, termed LongV-EVAL. Our code is released at: https://github.com/jidantang55/AdaFlow.
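The keyframe selection idea, picking frames that are representative yet spread across the video, can be sketched with a simple greedy farthest-point strategy. This is an illustrative assumption, not the paper's Adaptive Keyframe Selection algorithm: the cosine-similarity novelty criterion and the `n_keys` parameter are stand-ins for the actual scoring described in the paper.

```python
import numpy as np

def select_keyframes(features, n_keys=8):
    """Greedy keyframe selection (illustrative sketch).

    `features` is an (N, D) array of per-frame feature vectors. Starting
    from frame 0, each step adds the frame least similar to the frames
    already chosen, so a few keyframes cover the video's content.
    """
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    chosen = [0]                              # anchor on the first frame
    while len(chosen) < min(n_keys, len(feats)):
        sims = feats @ feats[chosen].T        # (N, |chosen|) cosine sims
        novelty = 1.0 - sims.max(axis=1)      # distance to nearest keyframe
        novelty[chosen] = -np.inf             # never re-pick a frame
        chosen.append(int(novelty.argmax()))
    return sorted(chosen)
```

Selecting by novelty rather than uniform spacing means scene changes attract keyframes while static stretches are summarized by few frames, which matches the goal of jointly optimizing representativeness and coverage.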