AI Summary
Existing text-driven video editing methods often suffer from physically implausible deformations and temporal inconsistencies when handling non-rigid motions, leading to visual artifacts such as warping and flickering. To address this, this work introduces NRVBench, the first benchmark specifically designed for non-rigid video editing. It comprises a high-quality dataset; NRVE-Acc, a fine-grained evaluation metric based on vision-language models; and VM-Edit, a training-free, structure-aware editing method built on a dual-region denoising mechanism. Methods are evaluated through a comprehensive protocol that combines physics-based category annotations with multiple-choice question answering. Experiments reveal significant deficiencies in the physical plausibility of current approaches, while VM-Edit achieves superior performance on both established and newly proposed metrics, demonstrating the effectiveness of the method and the value of the benchmark.
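For intuition, the dual-region mechanism can be read as a per-step latent blend inside a diffusion denoising loop: latents inside an edit mask follow the edit-prompt trajectory (allowing the non-rigid deformation), while latents outside it track a reconstruction of the source video (preserving structure). The sketch below assumes exactly that formulation; the function and parameter names (`dual_region_denoise_step`, `edit_mask`, `blend_weight`) are hypothetical and not the paper's published interface.

```python
import torch

def dual_region_denoise_step(
    edit_latents: torch.Tensor,    # denoised latents guided by the edit prompt
    source_latents: torch.Tensor,  # denoised latents reconstructing the source video
    edit_mask: torch.Tensor,       # 1 inside the deforming region, 0 elsewhere
    blend_weight: float = 1.0,     # <1.0 softens the edit region toward the source
) -> torch.Tensor:
    """Blend two denoising trajectories region by region.

    Inside the mask the edit trajectory dominates, permitting non-rigid
    deformation; outside it the source trajectory is kept, preserving
    background and object structure.
    """
    mask = edit_mask * blend_weight
    return mask * edit_latents + (1.0 - mask) * source_latents
```

Lowering `blend_weight` would soften the boundary between the two regions, trading deformation freedom for stronger structural preservation.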
Abstract
Despite remarkable progress in text-driven video editing, generating coherent non-rigid deformations remains a critical challenge, often plagued by physical distortion and temporal flickering. To bridge this gap, we propose NRVBench, the first dedicated and comprehensive benchmark for evaluating non-rigid video editing. First, we curate a high-quality dataset of 180 non-rigid motion videos spanning six physics-based categories, equipped with 2,340 fine-grained task instructions and 360 multiple-choice questions. Second, we propose NRVE-Acc, a novel evaluation metric based on Vision-Language Models that rigorously assesses physical compliance, temporal consistency, and instruction alignment, overcoming the limitations of general-purpose metrics in capturing complex dynamics. Third, we introduce VM-Edit, a training-free baseline that uses a dual-region denoising mechanism to achieve structure-aware control, balancing structural preservation against dynamic deformation. Extensive experiments show that current methods fall short in maintaining physical plausibility, whereas our method achieves strong performance on both standard and proposed metrics. We believe the benchmark can serve as a standard testing platform for advancing physics-aware video editing.
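To make the evaluation concrete, an NRVE-Acc-style score over the 360 multiple-choice questions could be tallied per physics category as in the sketch below. This is only an illustration of the scoring logic under assumed data fields; the `vlm_answer` callable stands in for whatever vision-language model is queried, and its interface is hypothetical.

```python
from collections import defaultdict
from typing import Callable, Dict, Iterable, List, Tuple

def nrve_acc(
    samples: Iterable[dict],
    vlm_answer: Callable[[str, str, List[str]], str],
) -> Tuple[float, Dict[str, float]]:
    """Multiple-choice accuracy, overall and per physics category.

    Each sample is assumed to carry: 'video' (path), 'question',
    'options' (answer choices), 'answer' (ground-truth option letter),
    and 'category' (one of the six physics-based categories).
    """
    correct: Dict[str, int] = defaultdict(int)
    total: Dict[str, int] = defaultdict(int)
    for s in samples:
        pred = vlm_answer(s["video"], s["question"], s["options"])
        total[s["category"]] += 1
        correct[s["category"]] += int(pred == s["answer"])
    per_category = {c: correct[c] / total[c] for c in total}
    overall = sum(correct.values()) / max(sum(total.values()), 1)
    return overall, per_category
```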