🤖 AI Summary
To address cross-view appearance inconsistency in Gaussian Splatting, which is caused by camera ISP pipelines, illumination variations, and weather changes and manifests as floater artifacts and color distortions, this work proposes, for the first time, decoupling appearance variations at the image level rather than at the Gaussian parameter level, thereby implicitly enforcing 3D appearance consistency. The method introduces a lightweight, image-level transformation network that aggregates appearance features from 3D space and integrates seamlessly into the Gaussian rasterization pipeline, enabling plug-and-play deployment and real-time rendering. Compared to prior approaches, it significantly reduces training time and GPU memory consumption while preserving rendering speed, achieves state-of-the-art visual quality under diverse appearance conditions, and remains compatible with various Gaussian rasterization baselines.
📝 Abstract
Gaussian Splatting has emerged as a prominent 3D representation in novel view synthesis, but it still suffers from appearance variations, which are caused by various factors, such as modern camera ISPs, different times of day, weather conditions, and local light changes. These variations can lead to floaters and color distortions in the rendered images/videos. Recent appearance modeling approaches in Gaussian Splatting are either tightly coupled with the rendering process, hindering real-time rendering, or they only account for mild global variations, performing poorly in scenes with local light changes. In this paper, we propose DAVIGS, a method that decouples appearance variations in a plug-and-play and efficient manner. By transforming the rendering results at the image level instead of the Gaussian level, our approach can model appearance variations with minimal optimization time and memory overhead. Furthermore, our method gathers appearance-related information in 3D space to transform the rendered images, thus building 3D consistency across views implicitly. We validate our method on several appearance-variant scenes, and demonstrate that it achieves state-of-the-art rendering quality with minimal training time and memory usage, without compromising rendering speed. Additionally, it provides performance improvements for different Gaussian Splatting baselines in a plug-and-play manner.
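To make the image-level idea concrete: the paper's transformation network is learned and conditioned on appearance features gathered in 3D space, but the core contrast with Gaussian-level modeling can be sketched with a much simpler stand-in. The snippet below (a hypothetical illustration, not the actual DAVIGS network) applies a per-view affine color transform to an already-rasterized image, leaving the Gaussian parameters untouched:

```python
import numpy as np

def apply_image_level_transform(rendered, color_matrix, color_bias):
    """Correct a rendered view's appearance at the image level.

    rendered:     (H, W, 3) float image from the Gaussian rasterizer
    color_matrix: (3, 3) per-view color transform; in DAVIGS this role
                  is played by a learned network, not a fixed matrix
    color_bias:   (3,) per-view color offset
    """
    h, w, _ = rendered.shape
    flat = rendered.reshape(-1, 3)
    # Affine transform per pixel: out = M @ rgb + b
    out = flat @ color_matrix.T + color_bias
    # Keep values in a displayable range.
    return np.clip(out, 0.0, 1.0).reshape(h, w, 3)

# Example: a slightly dark rendering corrected by a per-view gain and lift.
img = np.full((4, 4, 3), 0.4)
gain = np.eye(3) * 1.2                  # per-channel scaling
lift = np.array([0.05, 0.0, -0.02])     # per-channel offset
corrected = apply_image_level_transform(img, gain, lift)
```

Because the transform is applied after rasterization, the underlying Gaussians stay appearance-free and shared across views, which is what implicitly enforces 3D consistency; per-view variation is absorbed by the image-level mapping instead.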