🤖 AI Summary
Real-time rendering of 3D Gaussian Splatting (3DGS) in VR suffers from high latency, and conventional gaze-tracked foveated rendering can worsen it because eye tracking adds its own overhead. Method: This paper proposes an efficient parallel framework that couples incremental gaze prediction with dynamic-resolution foveated rendering, executed as an end-to-end pipeline on the GPU to eliminate the serial dependency between tracking and rendering and its associated latency. Built upon 3DGS, the framework adaptively modulates resolution and Gaussian point sampling density around the retinal fovea, preserving perceptual fidelity. Contribution/Results: Experiments demonstrate up to a 2× reduction in end-to-end rendering latency while maintaining stable subjective quality across diverse VR scenes. The approach delivers a scalable, system-level solution for high-fidelity, low-latency VR rendering.
📝 Abstract
Virtual reality (VR) is transforming immersive digital interfaces, enhancing education, professional practice, and entertainment by deepening user engagement and opening new possibilities across industries. Among its many applications, image rendering is crucial. Nevertheless, rendering methodologies like 3D Gaussian Splatting impose high computational demands, driven predominantly by user expectations of superior visual quality. This results in notable processing delays in real-time image rendering, which degrades the user experience. Additionally, VR devices such as head-mounted displays (HMDs) are intricately linked to human visual behavior, leveraging knowledge from perception and cognition to improve the user experience. These insights have spurred the development of foveated rendering, a technique that dynamically adjusts rendering resolution based on the user's gaze direction. The resulting approach, known as gaze-tracked foveated rendering, significantly reduces the computational burden of the rendering process.
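The core idea of foveated rendering can be illustrated with a small sketch: render at full resolution inside the foveal region around the gaze point, and let the resolution scale fall off with angular eccentricity. The function below is an illustrative linear-falloff model, not the paper's actual modulation scheme; the `ppd` (pixels per degree) constant and the foveal/peripheral radii are hypothetical values chosen for the example.

```python
import math

def foveation_scale(px, py, gaze_x, gaze_y, ppd=20.0,
                    fovea_deg=5.0, periph_deg=30.0, min_scale=0.25):
    """Illustrative resolution scale for a pixel given the gaze position.

    Full resolution (1.0) inside the foveal radius, linearly decreasing
    to min_scale at the peripheral radius and constant beyond it.
    ppd, fovea_deg, periph_deg, and min_scale are hypothetical constants.
    """
    # Angular eccentricity of the pixel from the gaze point, in degrees.
    ecc_deg = math.hypot(px - gaze_x, py - gaze_y) / ppd
    if ecc_deg <= fovea_deg:
        return 1.0
    if ecc_deg >= periph_deg:
        return min_scale
    # Linear interpolation between full and minimum resolution.
    t = (ecc_deg - fovea_deg) / (periph_deg - fovea_deg)
    return 1.0 + t * (min_scale - 1.0)
```

A renderer would use such a scale to choose a lower shading rate, coarser tile resolution, or, in a 3DGS context, a sparser Gaussian sampling density for peripheral pixels.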
Although gaze-tracked foveated rendering can reduce rendering costs, the computational overhead of the gaze tracking process itself can sometimes outweigh the rendering savings, leading to increased processing latency. To address this issue, we propose an efficient rendering framework called *A3FR*, designed to minimize the latency of gaze-tracked foveated rendering via the parallelization of gaze tracking and foveated rendering processes. For the rendering algorithm, we utilize 3D Gaussian Splatting, a state-of-the-art neural rendering technique. Evaluation results demonstrate that A3FR can reduce end-to-end rendering latency by up to 2× while maintaining visual quality.
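The parallelization idea can be sketched as overlapping the two stages across frames: while frame N is rendered with the most recent available gaze estimate, tracking for the next frame runs concurrently instead of blocking the renderer. This is a minimal thread-based sketch of that overlap, not the paper's GPU pipeline; `track_gaze` and `render_foveated` are hypothetical placeholders whose sleeps stand in for tracking and rendering cost.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def track_gaze(frame_id):
    # Placeholder for eye-tracking inference (hypothetical cost).
    time.sleep(0.004)
    return (0.5, 0.5)  # normalized gaze coordinates

def render_foveated(frame_id, gaze):
    # Placeholder for foveated 3DGS rasterization (hypothetical cost).
    time.sleep(0.008)
    return f"frame{frame_id}@{gaze}"

def serial_frame(frame_id):
    # Serial baseline: rendering waits for tracking to finish.
    gaze = track_gaze(frame_id)
    return render_foveated(frame_id, gaze)

def pipelined_frame(pool, frame_id, prev_gaze):
    # Overlap: render frame N with the previous gaze estimate while
    # tracking runs concurrently; the fresh estimate feeds frame N+1.
    gaze_future = pool.submit(track_gaze, frame_id)
    frame = render_foveated(frame_id, prev_gaze)
    return frame, gaze_future.result()
```

In the serial baseline, per-frame latency is the sum of the two stage costs; with the overlap it approaches the maximum of the two, which is the source of the latency reduction the abstract describes.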