VIRGi: View-dependent Instant Recoloring of 3D Gaussian Splats

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of efficient and photorealistic appearance-editing methods for 3D Gaussian Splatting (3DGS) scenes, which often fail to preserve view-dependent effects such as specular highlights. The authors propose a real-time, single-image-guided global recoloring approach that decomposes color into diffuse and view-dependent components. By leveraging a multi-view image-patch training strategy, the method propagates user edits from a single edited image to the entire 3DGS scene in under two seconds, while enabling explicit control over the strength of the view dependence. The authors claim this is the first view-aware recoloring technique tailored to 3DGS, outperforming NeRF-based methods across multiple datasets and significantly improving both editing quality and reconstruction fidelity.

📝 Abstract
3D Gaussian Splatting (3DGS) has recently transformed the fields of novel view synthesis and 3D reconstruction due to its ability to accurately model complex 3D scenes and its unprecedented rendering performance. However, a significant challenge persists: the absence of an efficient and photorealistic method for editing the appearance of the scene's content. In this paper, we introduce VIRGi, a novel approach for rapidly editing the color of scenes modeled by 3DGS while preserving view-dependent effects such as specular highlights. Key to our method are a novel architecture that separates color into diffuse and view-dependent components, and a multi-view training strategy that integrates image patches from multiple viewpoints. Improving over conventional single-view batch training, our 3DGS representation provides more accurate reconstruction and serves as a solid foundation for the recoloring task. For 3DGS recoloring, we then introduce a rapid scheme requiring only one manually edited image of the scene from the end-user. By fine-tuning the weights of a single MLP, alongside a module for single-shot segmentation of the editable area, the color edits are seamlessly propagated to the entire scene in just two seconds, facilitating real-time interaction and providing control over the strength of the view-dependent effects. An exhaustive validation on diverse datasets demonstrates significant quantitative and qualitative advancements over competitors based on Neural Radiance Field representations.
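The diffuse/view-dependent decomposition described in the abstract can be sketched roughly as follows. This is an illustrative assumption, not the paper's actual architecture: the function name, the degree-1 spherical-harmonics residual, and the `vd_strength` knob are all hypothetical stand-ins for the idea of recoloring the diffuse base while separately scaling the view-dependent term.

```python
import numpy as np

def view_dependent_color(diffuse_rgb, sh_coeffs, view_dir, vd_strength=1.0):
    """Hypothetical sketch: a base diffuse color plus a view-dependent
    residual evaluated from degree-1 spherical harmonics, scaled by a
    user-controlled strength factor (as in the paper's editing control)."""
    d = view_dir / np.linalg.norm(view_dir)
    # Real degree-1 SH basis evaluated at the normalized view direction.
    basis = np.array([0.488603 * d[1], 0.488603 * d[2], 0.488603 * d[0]])
    vd_residual = sh_coeffs @ basis  # (3, 3) RGB coefficients -> (3,) residual
    return np.clip(diffuse_rgb + vd_strength * vd_residual, 0.0, 1.0)

# Recoloring under this decomposition only swaps the diffuse component,
# while the view-dependent residual is kept, attenuated, or boosted.
edited_diffuse = np.array([0.8, 0.2, 0.2])  # user-edited base color
coeffs = np.zeros((3, 3))                   # no specular residual in this toy case
print(view_dependent_color(edited_diffuse, coeffs, np.array([0.0, 0.0, 1.0])))
```

With zero SH coefficients the output is just the edited diffuse color; a nonzero `sh_coeffs` would add a highlight that shifts with `view_dir`, which is the effect the method aims to preserve through edits.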
Problem

Research questions and friction points this paper is trying to address.

3D Gaussian Splatting
view-dependent effects
recoloring
appearance editing
novel view synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D Gaussian Splatting
view-dependent recoloring
multi-view training
real-time appearance editing
diffuse-specular decomposition
👥 Authors
Alessio Mazzucchelli (Arquimea Research Center, San Cristóbal de la Laguna, Santa Cruz de Tenerife, 38320 and Universidad Politécnica de Catalunya, Doctoral Degree in Automatic Control, Robotics and Vision, Carrer de Jordi Girona, 31, Les Corts, Barcelona, 08034, Spain)
Ivan Ojeda-Martin (Arquimea Research Center, San Cristóbal de la Laguna, Santa Cruz de Tenerife, 38320)
Fernando Rivas-Manzaneque (Volinga AI, San Cristóbal de la Laguna, Santa Cruz de Tenerife, 38320 and Universidad Politécnica de Madrid, Programa de Doctorado en Automática y Robótica, Calle de José Gutiérrez Abascal 2, Madrid, 28006, Spain)
Elena Garces (Adobe; computer graphics, computer vision, machine learning)
Adrian Penate-Sanchez (Lecturer at Universidad de Las Palmas de Gran Canaria (ULPGC); Computer Vision, Robotics, Machine Learning)
Francesc Moreno-Noguer (Amazon Science; Computer Vision, Deep Learning)