🤖 AI Summary
To address the high texture storage overhead, low per-pixel sampling efficiency, and excessive memory consumption of static global illumination (GI) baking on resource-constrained platforms, this paper proposes a lightweight baking method. It compresses indirect illumination using spherical harmonics and introduces a novel "inverse probe distribution" mechanism: a unique mapping from each mesh, defined in local space, to its nearest light probes is constructed offline, enabling probe-level illumination reuse across all instances of that mesh. The method eliminates the need for auxiliary rendering passes or fragment-level sampling, significantly reducing runtime overhead. It requires only about 5% of the memory of mainstream approaches while preserving high-fidelity indirect lighting quality. This work delivers an efficient, high-quality static GI solution tailored for memory- and compute-limited environments such as mobile devices and WebGL applications.
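The "inverse probe distribution" described above can be pictured as an offline nearest-probe query in the mesh's local space. The sketch below is purely illustrative (the paper's actual data layout and association granularity are not specified here); all names are hypothetical.

```python
# Hypothetical sketch: offline "inverse probe distribution".
# Each local-space vertex of a mesh is associated with its nearest light
# probe; because the mapping lives in local space, every instance of the
# mesh reuses the same association at runtime with no fragment sampling.

def nearest_probe_indices(vertices, probes):
    """For each local-space vertex, return the index of the closest probe.

    vertices, probes: lists of (x, y, z) tuples.
    """
    def dist2(a, b):
        # Squared Euclidean distance (no sqrt needed for argmin).
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    return [min(range(len(probes)), key=lambda i: dist2(v, probes[i]))
            for v in vertices]

# Baked once per mesh, not per instance.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
probes = [(0.1, 0.0, 0.0), (0.0, 1.9, 0.0)]
print(nearest_probe_indices(vertices, probes))  # [0, 0, 1]
```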
📝 Abstract
Global illumination combines direct and indirect lighting to create realistic lighting effects, bringing virtual scenes closer to reality. Static global illumination is a crucial component of virtual scene rendering: it leverages precomputation and baking techniques to significantly reduce runtime computational cost. Unfortunately, many existing works prioritize visual quality by relying on extensive texture storage and massive pixel-level texture sampling, incurring substantial performance overhead. In this paper, we introduce an illumination reconstruction method that effectively reduces sampling in the fragment shader and avoids additional render passes, making it well suited for low-end platforms. To achieve high-quality global illumination with reduced memory usage, we adopt a spherical harmonics fitting approach to bake the effective illumination information and propose an inverse probe distribution method that generates a unique probe association for each mesh. This association, which can be generated offline in local space, ensures consistent lighting quality across all instances of the same mesh. As a result, our method delivers highly competitive lighting effects while using only approximately 5% of the memory required by mainstream industry techniques.
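The SH compression mentioned above amounts to storing a small vector of coefficients per probe and reconstructing lighting as a dot product with the SH basis. The abstract does not state the SH order; second-order (9 coefficients per color channel) is the common choice and is assumed in this minimal sketch.

```python
# Minimal sketch of reconstructing indirect lighting from baked spherical
# harmonics coefficients. Second-order (l <= 2, 9 coefficients) real SH is
# assumed here for illustration; the paper's exact order is not stated.

def sh_basis_l2(n):
    """Evaluate the 9 real SH basis functions at a unit direction n = (x, y, z)."""
    x, y, z = n
    return [
        0.282095,                        # l=0
        0.488603 * y,                    # l=1, m=-1
        0.488603 * z,                    # l=1, m=0
        0.488603 * x,                    # l=1, m=1
        1.092548 * x * y,                # l=2, m=-2
        1.092548 * y * z,                # l=2, m=-1
        0.315392 * (3.0 * z * z - 1.0),  # l=2, m=0
        1.092548 * x * z,                # l=2, m=1
        0.546274 * (x * x - y * y),      # l=2, m=2
    ]

def reconstruct(coeffs, normal):
    """Dot the baked per-probe SH coefficients with the basis at `normal`."""
    return sum(c * b for c, b in zip(coeffs, sh_basis_l2(normal)))

# A purely constant (l=0) lighting environment: the reconstruction is the
# same for every direction.
coeffs = [1.0] + [0.0] * 8
print(round(reconstruct(coeffs, (0.0, 0.0, 1.0)), 6))  # 0.282095
```

Because reconstruction is a 9-term dot product per channel, it can run per vertex with the probe chosen offline, which is consistent with the paper's goal of avoiding per-pixel texture sampling.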