🤖 AI Summary
Real-time Neural Radiance Field (NeRF) training on AR/VR edge devices faces a dual challenge: achieving instant (under 5 seconds) 3D reconstruction while adhering to a stringent power budget (≤1.9 W).
Method: This paper proposes Instant-3D, the first framework for instant on-device NeRF training. Its algorithm decouples the hash-encoded embedding grid into separate color and density representations, and its hardware accelerator co-optimizes the pipeline by aggregating nearby points' memory reads during the feed-forward pass, merging embedding-grid gradient updates within a sliding time window during back-propagation, and fusing computation cores to serve the different grid sizes of the color and density branches.
Contribution/Results: The framework matches the reconstruction quality of prior methods while accelerating training by 41×–248×. It completes single-scene training in just 1.6 seconds at a power draw of ≤1.9 W, demonstrating for the first time that real-time NeRF training is feasible on resource-constrained edge devices.
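The color/density decoupling can be sketched as two independently configured embedding grids. Note the concrete table sizes, the update ratio, and which branch is coarser are illustrative assumptions for this sketch, not the paper's actual settings.

```python
import numpy as np

# Sketch of Instant-3D's decoupling idea: color and density each get
# their own embedding grid with a different (1) size and (2) update
# frequency. All numbers below are illustrative assumptions.
rng = np.random.default_rng(0)
density_grid = rng.standard_normal((2**12, 2))  # assumed coarser branch
color_grid = rng.standard_normal((2**16, 2))    # assumed finer branch

density_update_every = 4  # assumed ratio: density updated 4x less often
density_updates = color_updates = 0

for step in range(16):          # 16 mock training iterations
    color_grid *= 1.0           # stand-in for a gradient step
    color_updates += 1
    if step % density_update_every == 0:
        density_grid *= 1.0     # stand-in for a gradient step
        density_updates += 1

# Fewer updates to one branch means fewer embedding-grid writes,
# squeezing out redundant memory traffic during training.
print(color_updates, density_updates)  # 16 4
```

The point of the sketch is that the two branches need not share one grid configuration: shrinking one grid and updating it less often both cut the dominant embedding-grid memory traffic.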
📝 Abstract
Neural Radiance Field (NeRF) based 3D reconstruction is highly desirable for immersive Augmented and Virtual Reality (AR/VR) applications, but achieving instant (i.e., < 5 seconds) on-device NeRF training remains a challenge. In this work, we first identify the inefficiency bottleneck: the need to interpolate NeRF embeddings up to 200,000 times from a 3D embedding grid during each training iteration. To alleviate this, we propose Instant-3D, an algorithm-hardware co-design acceleration framework that achieves instant on-device NeRF training. Our algorithm decomposes the embedding grid representation in terms of color and density, enabling computational redundancy to be squeezed out by adopting different (1) grid sizes and (2) update frequencies for the color and density branches. Our hardware accelerator further reduces the dominant memory accesses for embedding grid interpolation by (1) mapping multiple nearby points' memory read requests into one during the feed-forward process, (2) merging embedding grid updates from the same sliding time window during back-propagation, and (3) fusing different computation cores to support the different grid sizes needed by the color and density branches of Instant-3D algorithm. Extensive experiments validate the effectiveness of Instant-3D, achieving a large training time reduction of 41× - 248× while maintaining the same reconstruction quality. Excitingly, Instant-3D has enabled instant 3D reconstruction for AR/VR, requiring a reconstruction time of only 1.6 seconds per scene and meeting the AR/VR power consumption constraint of 1.9 W.
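To make the bottleneck concrete, a minimal sketch of hash-grid embedding interpolation follows, in the style of multiresolution hash encodings: each queried 3D point reads 8 corner entries from a hash table and blends them trilinearly, and repeating this up to 200,000 times per iteration is what dominates memory traffic. The hash primes, table size, feature width, and resolution below are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Illustrative spatial-hash primes (commonly used in hash encodings;
# assumed here, not taken from the paper).
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_index(corner, table_size):
    """Map an integer 3D corner coordinate to a hash-table slot."""
    c = corner.astype(np.uint64)
    return int((c[0] * PRIMES[0]) ^ (c[1] * PRIMES[1]) ^ (c[2] * PRIMES[2])) % table_size

def interpolate(point, table, resolution):
    """Trilinearly interpolate an embedding for one 3D point in [0, 1)^3.

    Each call issues 8 hash-table reads; nearby points share corners,
    which is why merging those reads (as Instant-3D's accelerator does)
    pays off.
    """
    scaled = point * resolution
    base = np.floor(scaled).astype(np.int64)
    frac = scaled - base
    out = np.zeros(table.shape[1])
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                corner = base + np.array([dx, dy, dz])
                w = ((frac[0] if dx else 1 - frac[0])
                     * (frac[1] if dy else 1 - frac[1])
                     * (frac[2] if dz else 1 - frac[2]))
                out += w * table[hash_index(corner, len(table))]
    return out

rng = np.random.default_rng(0)
table = rng.standard_normal((2**14, 2))  # 16K slots, 2 features each
emb = interpolate(np.array([0.3, 0.6, 0.9]), table, resolution=64)
```

Because the 8 trilinear weights sum to 1, interpolating over a constant table returns that constant, which gives a quick sanity check on the blend.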