🤖 AI Summary
Existing neural field research primarily focuses on scene representation (e.g., neural points, 3D Gaussians) while neglecting optimization of the rendering process itself. To address this, we propose K-Buffers—a plug-and-play method that introduces multi-buffer parallel rendering into neural fields, decoupling rendering enhancement from scene representation. Our method renders K buffers and constructs K pixel-wise feature maps, which are fused via a lightweight K-Feature Fusion Network (KFN) and subsequently decoded into high-fidelity images. We further design dedicated acceleration strategies to improve inference efficiency. Crucially, K-Buffers requires no modification to the underlying representation and is compatible with both neural point fields and 3D Gaussian Splatting (3DGS). Experiments demonstrate significant improvements in PSNR and SSIM over baseline methods, alongside accelerated inference—validating the approach’s generality, effectiveness, and practicality.
📝 Abstract
Neural fields are now the central focus of research in 3D vision and computer graphics. Existing methods mainly focus on various scene representations, such as neural points and 3D Gaussians. However, few works have studied the rendering process to enhance the neural fields. In this work, we propose a plug-in method named K-Buffers that leverages multiple buffers to improve the rendering performance. Our method first renders K buffers from scene representations and constructs K pixel-wise feature maps. Then, we introduce a K-Feature Fusion Network (KFN) to merge the K pixel-wise feature maps. Finally, we adopt a feature decoder to generate the rendered image. We also introduce an acceleration strategy to improve rendering speed and quality. We apply our method to well-known radiance field baselines, including neural point fields and 3D Gaussian Splatting (3DGS). Extensive experiments demonstrate that our method effectively enhances the rendering performance of neural point fields and 3DGS.
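The three-stage pipeline in the abstract (render K pixel-wise feature maps → fuse with KFN → decode to an image) can be sketched in NumPy. This is a minimal illustration of the data flow only: the per-pixel softmax-weighted fusion and the random 1×1 linear decoder below are stand-in assumptions, not the paper's actual KFN or decoder architecture.

```python
import numpy as np

# Illustrative sketch of the K-Buffers data flow (not the paper's KFN).
rng = np.random.default_rng(0)
K, C, H, W = 4, 8, 16, 16  # K buffers, C feature channels, H x W image

# Step 1: K pixel-wise feature maps rendered from the scene representation
# (here: random placeholders standing in for rendered buffers).
feature_maps = rng.standard_normal((K, C, H, W))

# Step 2: fuse the K maps into one -- a stand-in for the K-Feature Fusion
# Network, using per-pixel softmax weights over the K buffers.
scores = feature_maps.mean(axis=1, keepdims=True)       # (K, 1, H, W)
weights = np.exp(scores) / np.exp(scores).sum(axis=0)   # softmax over K
fused = (weights * feature_maps).sum(axis=0)            # (C, H, W)

# Step 3: decode the fused features to RGB with a random 1x1 linear
# decoder (a stand-in for the paper's feature decoder).
decoder = rng.standard_normal((3, C)) * 0.1             # (3, C)
image = np.einsum('oc,chw->ohw', decoder, fused)        # (3, H, W)

print(image.shape)  # (3, 16, 16)
```

In the actual method the fusion and decoding steps are learned networks trained jointly with the scene representation; the sketch only shows how K buffers collapse into a single rendered image.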