🤖 AI Summary
This work addresses the high computational cost and inter-view redundancy inherent in real-time rendering of radiance fields (e.g., NeRF, 3D Gaussian Splatting) on light field displays. Methodologically, it proposes a general, retraining-free rendering framework that employs a single-pass plane-sweeping strategy, integrates caching of non-directional components with multi-view parallelization, and supports diverse radiance field representations uniformly; lightweight interpolation and deferred update mechanisms further eliminate redundant per-view computation. The key contributions are: (1) the first end-to-end real-time mapping from radiance fields to light field displays, enabling cross-model generalization without retraining; and (2) state-of-the-art performance on the Looking Glass display: over 200 FPS at 512p resolution with 45 views, a 22× speedup over naive per-view rendering, with no perceptible degradation in visual quality.
📝 Abstract
Radiance fields have revolutionized photo-realistic 3D scene visualization by enabling high-fidelity reconstruction of complex environments, making them an ideal match for light field displays. However, integrating these technologies presents significant computational challenges, as light field displays require multiple high-resolution renderings from slightly shifted viewpoints, while radiance fields rely on computationally intensive volume rendering. In this paper, we propose a unified and efficient framework for real-time radiance field rendering on light field displays. Our method supports a wide range of radiance field representations, including NeRFs, 3D Gaussian Splatting, and Sparse Voxels, within a shared architecture based on a single-pass plane-sweeping strategy and caching of shared, non-directional components. The framework generalizes across different scene formats without retraining, and avoids redundant computation across views. We further demonstrate a real-time interactive application on a Looking Glass display, achieving 200+ FPS at 512p across 45 views, enabling seamless, immersive 3D interaction. On standard benchmarks, our method achieves up to 22× speedup compared to independently rendering each view, while preserving image quality.
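The core idea of caching shared, non-directional components can be sketched with a toy radiance model. The sketch below is illustrative only and is not the paper's implementation: it assumes a color split into a view-independent (diffuse-like) term and a cheap view-dependent residual, so that for 45 slightly shifted viewpoints the expensive term is evaluated once and reused across all views.

```python
import numpy as np

# Toy radiance model: color(p, d) = diffuse(p) + specular(p, d).
# The function names, the diffuse/specular split, and the cost model
# are assumptions for illustration, not the paper's actual method.

rng = np.random.default_rng(0)
points = rng.standard_normal((1000, 3))   # sample points along rays
weights = rng.standard_normal((3, 3))     # toy shading parameters

def diffuse(pts):
    # Expensive, view-independent component (shared across all views).
    return np.tanh(pts @ weights)

def specular(pts, view_dir):
    # Cheap, view-dependent residual.
    return 0.1 * np.outer(pts @ view_dir, np.ones(3))

def render_naive(pts, view_dirs):
    # Baseline: full evaluation repeated for every view.
    return [diffuse(pts) + specular(pts, d) for d in view_dirs]

def render_cached(pts, view_dirs):
    # Cached: evaluate the non-directional component once, reuse it.
    base = diffuse(pts)
    return [base + specular(pts, d) for d in view_dirs]

# 45 slightly shifted viewpoints, as on a Looking Glass display.
view_dirs = [np.array([np.sin(t), 0.0, np.cos(t)])
             for t in np.linspace(-0.3, 0.3, 45)]
```

With this split, the expensive term is computed once instead of 45 times while both paths produce identical images, which is the redundancy-elimination argument in miniature.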