Mono4DEditor: Text-Driven 4D Scene Editing from Monocular Video via Point-Level Localization of Language-Embedded Gaussians

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses text-driven, fine-grained local editing of 4D dynamic scenes reconstructed from monocular video, a previously unexplored task. To overcome inaccurate semantic localization and distortion in unedited regions, we propose the first method that tightly couples CLIP language embeddings with a 3D Gaussian splatting representation, together with a two-stage point-level localization mechanism for precise spatial querying and semantic region refinement. We further integrate optical-flow alignment, interactive scribble guidance, and diffusion-based synthesis under spatiotemporal consistency constraints to achieve high-fidelity editing. Extensive experiments on diverse, complex dynamic scenes show that our approach significantly outperforms existing methods in editing flexibility, geometric and appearance fidelity, and semantic alignment accuracy.
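As a rough illustration of what a language-embedded Gaussian representation can look like, the sketch below attaches quantized CLIP features to Gaussians through a small learned codebook, so each Gaussian stores an assignment over codebook entries rather than a full feature vector. This is a minimal sketch under assumed details: the class name, codebook size, and soft-assignment scheme are illustrative, not taken from the paper.

```python
import torch

class QuantizedLanguageField(torch.nn.Module):
    """Hypothetical sketch: per-Gaussian CLIP features via a learned codebook."""

    def __init__(self, num_gaussians: int, codebook_size: int = 256, feat_dim: int = 512):
        super().__init__()
        # The codebook holds full-dimensional CLIP-space features; each
        # Gaussian only learns assignment logits over codebook entries,
        # which is far cheaper than storing a feature per Gaussian.
        self.codebook = torch.nn.Parameter(torch.randn(codebook_size, feat_dim))
        self.assign_logits = torch.nn.Parameter(torch.zeros(num_gaussians, codebook_size))

    def forward(self) -> torch.Tensor:
        # Soft assignment keeps training differentiable; at query time one
        # could instead take the argmax codebook entry per Gaussian.
        weights = self.assign_logits.softmax(dim=-1)  # (N, K)
        return weights @ self.codebook                # (N, D) per-Gaussian features
```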

📝 Abstract
Editing 4D scenes reconstructed from monocular videos based on text prompts is a valuable yet challenging task with broad applications in content creation and virtual environments. The key difficulty lies in achieving semantically precise edits in localized regions of complex, dynamic scenes, while preserving the integrity of unedited content. To address this, we introduce Mono4DEditor, a novel framework for flexible and accurate text-driven 4D scene editing. Our method augments 3D Gaussians with quantized CLIP features to form a language-embedded dynamic representation, enabling efficient semantic querying of arbitrary spatial regions. We further propose a two-stage point-level localization strategy that first selects candidate Gaussians via CLIP similarity and then refines their spatial extent to improve accuracy. Finally, targeted edits are performed on localized regions using a diffusion-based video editing model, with flow and scribble guidance ensuring spatial fidelity and temporal coherence. Extensive experiments demonstrate that Mono4DEditor enables high-quality, text-driven edits across diverse scenes and object types, while preserving the appearance and geometry of unedited areas and surpassing prior approaches in both flexibility and visual fidelity.
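The abstract's two-stage localization can be pictured with a short sketch: stage one selects candidate Gaussians by CLIP cosine similarity to the text prompt, and stage two refines their spatial extent. The similarity threshold `tau_sim` and the quantile-based spatial pruning below are assumptions for illustration; the paper's exact refinement procedure may differ.

```python
import torch
import torch.nn.functional as F

def localize_gaussians(gaussian_feats: torch.Tensor,  # (N, D) per-Gaussian CLIP features
                       centers: torch.Tensor,          # (N, 3) Gaussian centers
                       text_feat: torch.Tensor,        # (D,) CLIP embedding of the prompt
                       tau_sim: float = 0.25) -> torch.Tensor:
    """Return a boolean mask over Gaussians selected for editing."""
    # Stage 1: coarse candidate selection by CLIP cosine similarity.
    sims = F.cosine_similarity(gaussian_feats, text_feat.unsqueeze(0), dim=-1)  # (N,)
    candidates = sims > tau_sim
    if not candidates.any():
        return candidates

    # Stage 2: spatial refinement -- discard isolated false positives that
    # lie far from the bulk of the candidate cluster.
    cluster_center = centers[candidates].mean(dim=0)
    dists = torch.linalg.norm(centers - cluster_center, dim=-1)
    radius = dists[candidates].quantile(0.95)
    return candidates & (dists <= radius)
```

The returned mask can then restrict all subsequent edits to the selected Gaussians, leaving the rest of the scene untouched.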
Problem

Research questions and friction points this paper is trying to address.

Enabling text-driven editing of 4D scenes reconstructed from monocular videos
Achieving precise semantic localization in complex dynamic scenes
Maintaining temporal coherence while preserving the integrity of unedited content
Innovation

Methods, ideas, or system contributions that make the work stand out.

Language-embedded Gaussians enable semantic spatial querying
Two-stage point-level localization refines spatial edit accuracy
Diffusion-based editing with flow guidance ensures temporal coherence (see the sketch below)
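For intuition on the flow-guidance point above, the snippet below backward-warps the previously edited frame into the current frame using Farneback optical flow from OpenCV, yielding a temporally aligned reference that a diffusion-based video editor could be conditioned on. This is a generic sketch, not the paper's pipeline; the paper's flow estimator and conditioning mechanism are not specified in this summary.

```python
import cv2
import numpy as np

def warp_prev_edit(prev_edited: np.ndarray,  # (H, W, 3) previously edited frame
                   prev_gray: np.ndarray,    # (H, W) previous source frame, grayscale
                   cur_gray: np.ndarray) -> np.ndarray:
    """Backward-warp the previous edited frame into the current frame."""
    # Estimate flow from the current frame back to the previous one, so each
    # current pixel knows where to sample in the previous edited frame.
    flow = cv2.calcOpticalFlowFarneback(cur_gray, prev_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    h, w = cur_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    map_x = grid_x + flow[..., 0]
    map_y = grid_y + flow[..., 1]
    return cv2.remap(prev_edited, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```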
Jin-Chuan Shi
Zhejiang University, China
Chengye Su
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, China
Jiajun Wang
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, China
Ariel Shamir
Professor of Computer Science, Reichman University (IDC Herzliya)
Computer Graphics, Image Processing, Geometric Modeling, Machine Learning
Miao Wang
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, China