🤖 AI Summary
Existing NeRF quantization methods suffer from a dual disconnect with hardware accelerators: quantization strategies are not co-optimized with hardware architecture, while accelerator design relies on manual exploration—resulting in low efficiency and suboptimal performance. This paper introduces the first hardware-aware NeRF quantization optimization framework, tightly integrating reinforcement learning with a configurable NeRF accelerator simulator to jointly and automatically search for quantization policies that balance model accuracy and hardware metrics (latency, area cost, and model size). The framework overcomes manual design bottlenecks by enabling end-to-end, constraint-driven quantization policy generation. Experiments demonstrate that our approach achieves 1.31–1.33× lower latency and 1.29–1.33× higher cost-efficiency compared to the state-of-the-art CAQ method, while significantly reducing model size.
📝 Abstract
Neural Radiance Field (NeRF) has emerged as a promising 3D reconstruction method, delivering high-quality results for AR/VR applications. While quantization methods and hardware accelerators have been proposed to enhance NeRF's computational efficiency, existing approaches face crucial limitations. Current quantization methods operate without considering hardware architecture, resulting in sub-optimal solutions within the vast design space encompassing accuracy, latency, and model size. Additionally, existing NeRF accelerators heavily rely on human experts to explore this design space, making the optimization process time-consuming, inefficient, and unlikely to discover optimal solutions. To address these challenges, we introduce HERO, a reinforcement learning framework performing hardware-aware quantization for NeRF. Our framework integrates a NeRF accelerator simulator to generate real-time hardware feedback, enabling fully automated adaptation to hardware constraints. Experimental results demonstrate that HERO achieves 1.31–1.33× better latency, 1.29–1.33× improved cost efficiency, and a more compact model size compared to CAQ, a previous state-of-the-art NeRF quantization framework. These results validate our framework's capability to effectively navigate the complex design space between hardware and algorithm requirements, discovering superior quantization policies for NeRF implementation. Code is available at https://github.com/ypzhng/HERO.
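The core loop the abstract describes — an agent proposing per-layer quantization policies and receiving combined accuracy/hardware feedback from an accelerator simulator — might be sketched roughly as below. All names, cost formulas, and the use of random search as a stand-in for the RL agent are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import random

# Hypothetical setup: candidate per-layer bit-widths for a small NeRF MLP.
# These values and the cost models below are toy assumptions.
BIT_CHOICES = [4, 6, 8]
NUM_LAYERS = 8

def simulate_hardware(policy):
    """Toy stand-in for the NeRF accelerator simulator: fewer total bits
    -> lower latency and area (arbitrary units, roughly linear here)."""
    total_bits = sum(policy)
    latency = 0.5 * total_bits
    area = 0.2 * total_bits
    return latency, area

def estimate_accuracy(policy):
    """Toy accuracy proxy: higher bit-widths give higher quality,
    with diminishing returns per layer."""
    return sum(b / (b + 2) for b in policy) / NUM_LAYERS

def reward(policy, lam=0.05):
    """Combined objective: accuracy minus a weighted hardware cost,
    mirroring the accuracy/latency/area trade-off described above."""
    latency, area = simulate_hardware(policy)
    return estimate_accuracy(policy) - lam * (latency + area)

# Random search as a placeholder for the RL agent: sample quantization
# policies and keep the best one under the combined reward.
random.seed(0)
best_policy, best_r = None, float("-inf")
for _ in range(200):
    policy = [random.choice(BIT_CHOICES) for _ in range(NUM_LAYERS)]
    r = reward(policy)
    if r > best_r:
        best_policy, best_r = policy, r

print(best_policy, round(best_r, 3))
```

In the actual framework, the random sampler would be replaced by a learned policy (the RL agent) and the toy cost functions by the configurable accelerator simulator, so that hardware constraints shape the search rather than being checked after the fact.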