Dynamic-eDiTor: Training-Free Text-Driven 4D Scene Editing with Multimodal Diffusion Transformer

📅 2025-11-29
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Text-driven 4D scene editing suffers from challenges in preserving multi-view and temporal consistency; existing frame-wise editing methods based on 2D diffusion models often induce motion artifacts, geometric drift, and incomplete edits. To address this, we propose the first zero-shot, text-driven 4D editing framework that integrates a multimodal diffusion Transformer with a 4D Gaussian splatting representation. Our method introduces spatio-temporal sub-grid attention and context token propagation, coupled with optical-flow-guided token replacement, to ensure local geometric coherence and global temporal continuity. Evaluated on the DyNeRF dataset, our approach significantly improves edit fidelity and cross-view, cross-temporal consistency. Notably, it achieves high-fidelity 4D dynamic scene editing without any fine-tuning, marking the first such zero-shot solution for this task.
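
As a rough illustration of the sub-grid idea, the sketch below partitions the latent tokens of a multi-view video along the view and time axes and runs joint self-attention inside each small (view, frame) window, so that edits are fused locally across neighboring views and timesteps. The tensor layout, window sizes, and the use of `torch.nn.MultiheadAttention` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of spatio-temporal sub-grid attention (STGA-style fusion).
# Assumptions (not the authors' code): latents are laid out as
# (views, frames, tokens, channels), and a plain MultiheadAttention layer
# stands in for the MM-DiT attention blocks.
import torch
import torch.nn as nn

def subgrid_attention(latents: torch.Tensor,
                      attn: nn.MultiheadAttention,
                      view_win: int = 2,
                      time_win: int = 2) -> torch.Tensor:
    """Jointly attend over all tokens inside each (view, time) sub-grid."""
    V, T, N, C = latents.shape
    out = latents.clone()
    for v0 in range(0, V, view_win):
        for t0 in range(0, T, time_win):
            block = latents[v0:v0 + view_win, t0:t0 + time_win]   # (v, t, N, C)
            v, t = block.shape[:2]
            tokens = block.reshape(1, v * t * N, C)               # one joint sequence
            fused, _ = attn(tokens, tokens, tokens)               # fuse views and frames
            out[v0:v0 + view_win, t0:t0 + time_win] = fused.reshape(v, t, N, C)
    return out

# Example: 4 views, 8 frames, 16x16 latent tokens, 320 channels.
attn = nn.MultiheadAttention(embed_dim=320, num_heads=8, batch_first=True)
fused = subgrid_attention(torch.randn(4, 8, 256, 320), attn)
print(fused.shape)  # torch.Size([4, 8, 256, 320])
```

Attending only within small windows keeps memory bounded while still coupling adjacent views and frames; in the paper's description, global consistency is then handled separately by context token propagation.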

๐Ÿ“ Abstract
Recent progress in 4D representations, such as Dynamic NeRF and 4D Gaussian Splatting (4DGS), has enabled dynamic 4D scene reconstruction. However, text-driven 4D scene editing remains under-explored due to the challenge of ensuring both multi-view and temporal consistency across space and time during editing. Existing studies rely on 2D diffusion models that edit frames independently, often causing motion distortion, geometric drift, and incomplete editing. We introduce Dynamic-eDiTor, a training-free text-driven 4D editing framework that leverages a Multimodal Diffusion Transformer (MM-DiT) and 4DGS. Its editing mechanism consists of Spatio-Temporal Sub-Grid Attention (STGA) for locally consistent cross-view and temporal fusion, and Context Token Propagation (CTP) for global propagation via token inheritance and optical-flow-guided token replacement. Together, these components allow Dynamic-eDiTor to perform seamless, globally consistent multi-view video editing without additional training and to directly optimize the pre-trained source 4DGS. Extensive experiments on the multi-view video dataset DyNeRF demonstrate that our method achieves superior editing fidelity and better multi-view and temporal consistency than prior approaches. Project page for results and code: https://di-lee.github.io/dynamic-eDiTor/
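
The final step mentioned above, directly optimizing the pre-trained source 4DGS with the edited multi-view video, can be pictured as a standard photometric fine-tuning loop. In the sketch below, `render_4dgs`, the `gaussians` parameter container, the L1 loss, and the Adam settings are all illustrative assumptions rather than the paper's actual optimization recipe.

```python
# Minimal sketch: fine-tune a pre-trained source 4DGS against edited frames.
# `gaussians` (an nn.Module holding Gaussian parameters) and `render_4dgs`
# (a differentiable renderer taking a camera and a timestamp) are assumed
# to exist; names, loss, and schedule are illustrative only.
import torch

def finetune_4dgs(gaussians, render_4dgs, edited_video, cameras,
                  steps=2000, lr=1e-3):
    """edited_video: dict[(view_id, t)] -> edited target image tensor (3, H, W)."""
    optim = torch.optim.Adam(gaussians.parameters(), lr=lr)
    keys = list(edited_video.keys())
    for step in range(steps):
        view, t = keys[step % len(keys)]                     # cycle over edited frames
        target = edited_video[(view, t)]
        rendered = render_4dgs(gaussians, cameras[view], t)  # differentiable render
        loss = (rendered - target).abs().mean()              # simple L1 photometric loss
        optim.zero_grad()
        loss.backward()
        optim.step()
    return gaussians
```

The appeal of producing an already view- and time-consistent edited video is that, in principle, the source 4DGS can then be updated with plain per-frame reconstruction losses like the one above.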
Problem

Research questions and friction points this paper is trying to address.

Enabling text-driven 4D scene editing without any training or fine-tuning
Ensuring multi-view and temporal consistency across space and time
Avoiding motion distortion and geometric drift during editing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free framework using Multimodal Diffusion Transformer
Spatio-Temporal Sub-Grid Attention for local consistency
Context Token Propagation for global editing consistency (see the flow-warping sketch after this list)
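
To make the token-propagation bullet above concrete, the following sketch warps context tokens from a previously edited frame onto the current frame's latent grid using token-resolution optical flow, then replaces the current tokens wherever the flow is valid. The grid layout, the backward-flow convention, the validity mask, and the use of `grid_sample` are assumptions for illustration; the flow itself would come from an external estimator (e.g., RAFT).

```python
# Minimal sketch of optical-flow-guided token replacement for context token
# propagation. Token maps are assumed to live on a (h, w) latent grid;
# all names and shapes are illustrative, not the authors' implementation.
import torch
import torch.nn.functional as F

def propagate_tokens(prev_tokens: torch.Tensor,   # (C, h, w) context tokens, previous frame
                     curr_tokens: torch.Tensor,   # (C, h, w) tokens of the current frame
                     flow: torch.Tensor,          # (2, h, w) backward flow (current -> previous)
                     valid: torch.Tensor          # (h, w) bool occlusion/validity mask
                     ) -> torch.Tensor:
    """Replace current-frame tokens with flow-warped context tokens where valid."""
    C, h, w = prev_tokens.shape
    # For every current token, look up its source location in the previous frame,
    # then normalize coordinates to [-1, 1] as expected by grid_sample.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    src_x = (xs + flow[0]).clamp(0, w - 1) / (w - 1) * 2 - 1
    src_y = (ys + flow[1]).clamp(0, h - 1) / (h - 1) * 2 - 1
    grid = torch.stack([src_x, src_y], dim=-1).unsqueeze(0)          # (1, h, w, 2)
    warped = F.grid_sample(prev_tokens.unsqueeze(0), grid,
                           mode="bilinear", align_corners=True)[0]   # (C, h, w)
    return torch.where(valid.unsqueeze(0), warped, curr_tokens)
```

In the framework described by the paper, this kind of replacement would happen inside the MM-DiT denoising pass so that later frames inherit context tokens from earlier edited ones; here it is shown as a standalone step for clarity.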