Dyn-E: Local Appearance Editing of Dynamic Neural Radiance Fields

📅 2023-07-24
🏛️ arXiv.org
📈 Citations: 7
Influential: 0
🤖 AI Summary
This work addresses three key challenges in local appearance editing of dynamic Neural Radiance Fields (NeRFs): low spatial precision, temporal inconsistency, and high user expertise requirements. We propose a single-frame-driven interactive editing method. Our contributions are: (1) a plug-and-play local surface representation enabling spatially precise region localization and modeling; (2) an invertible motion mapping network that jointly learns explicit motion representations and appearance edits, ensuring inter-frame temporal consistency; and (3) a dynamic NeRF fusion rendering framework supervised solely on a single input frame, enabling intuitive, no-code editing by non-expert users directly on any training frame. Experiments demonstrate that our method achieves strong locality, high temporal stability, and seamless compatibility with mainstream dynamic NeRF architectures across diverse dynamic scenes.
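The invertible motion mapping is the piece that keeps a single-frame edit temporally consistent: a point on the edited surface can be warped to any other frame, and warped back exactly. A minimal sketch of how such an exactly-invertible warp can be built (this is an additive coupling layer in the spirit of the idea, with toy weights — not the paper's actual network or architecture):

```python
import numpy as np

def mlp(x, w1, b1, w2, b2):
    """Tiny 2-layer MLP used as the coupling network (toy, untrained weights)."""
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

rng = np.random.default_rng(0)
# Toy weights; in the paper the motion representation is learned jointly
# with the appearance edit.
w1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # conditions on (x-coordinate, time)
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def forward(p, t):
    """Warp a 3D point p = (x, y, z) to time t via additive coupling."""
    x, yz = p[:1], p[1:]
    shift = mlp(np.concatenate([x, [t]]), w1, b1, w2, b2)  # depends only on x and t
    return np.concatenate([x, yz + shift])

def inverse(p, t):
    """Exact inverse: subtract the same shift, since it never sees y or z."""
    x, yz = p[:1], p[1:]
    shift = mlp(np.concatenate([x, [t]]), w1, b1, w2, b2)
    return np.concatenate([x, yz - shift])

p0 = np.array([0.3, -0.2, 0.7])
p_t = forward(p0, t=0.5)
p_back = inverse(p_t, t=0.5)
print(np.allclose(p0, p_back))  # round trip is exact by construction
```

The round trip is exact regardless of the network weights, which is the property that lets an edit placed in one frame be transported to every other frame without drift.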
📝 Abstract
Recently, the editing of neural radiance fields (NeRFs) has gained considerable attention, but most prior works focus on static scenes, while the appearance editing of dynamic scenes remains relatively underexplored. In this paper, we propose a novel framework to edit the local appearance of dynamic NeRFs by manipulating pixels in a single frame of the training video. Specifically, to locally edit the appearance of dynamic NeRFs while preserving unedited regions, we introduce a local surface representation of the edited region, which can be inserted into and rendered along with the original NeRF and warped to arbitrary other frames through a learned invertible motion representation network. With our method, users without professional expertise can easily add desired content to the appearance of a dynamic scene. We extensively evaluate our approach on various scenes and show that it achieves spatially and temporally consistent editing results. Notably, our approach is versatile and applicable to different variants of dynamic NeRF representations.
Problem

Research questions and friction points this paper is trying to address.

Editing local appearance in dynamic NeRFs
Preserving unedited regions during dynamic edits
Achieving consistent edits across frames
Innovation

Methods, ideas, or system contributions that make the work stand out.

Local surface representation for dynamic NeRF editing
Invertible motion network for temporal consistency
Single-frame pixel manipulation for user-friendly editing
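The "inserted into and rendered along with the original NeRF" step amounts to fusing the local edited surface with the base render along each ray. A minimal sketch of the general idea (assumed mechanics, not the paper's implementation): samples that fall on the edited surface take the user-painted color, everything else keeps the base NeRF's color, and the standard volume-rendering composite does the rest, leaving unedited regions untouched.

```python
import numpy as np

def composite(colors, densities, deltas):
    """Standard volume rendering: alpha-composite samples along one ray."""
    alphas = 1.0 - np.exp(-densities * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # transmittance
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

n = 8
deltas = np.full(n, 0.1)
base_colors = np.tile([0.2, 0.4, 0.6], (n, 1))  # colors queried from the base NeRF
densities = np.full(n, 5.0)

edit_mask = np.zeros(n, dtype=bool)
edit_mask[3] = True                              # this sample hits the edited surface
edit_color = np.array([1.0, 0.0, 0.0])           # user-painted appearance (toy value)

# Override only the masked samples, then composite as usual.
fused_colors = np.where(edit_mask[:, None], edit_color, base_colors)
pixel = composite(fused_colors, densities, deltas)
print(pixel.shape)  # (3,)
```

Because the override is local to the masked samples, rays that never intersect the edited surface reproduce the original render exactly — the locality property the Problem section asks for.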