MEIL-NeRF: Memory-Efficient Incremental Learning of Neural Radiance Fields

📅 2022-12-16
🏛️ IEEE Access
📈 Citations: 10
Influential: 3
🤖 AI Summary
NeRF suffers catastrophic forgetting and memory-scalability problems when trained incrementally, as in edge-device deployments and large-scale scenes where data arrives sequentially. To address this, the paper proposes a self-distillation-based incremental learning framework that uses the NeRF itself as the memory: past pixel observations are regenerated by querying the network with learned ray samples, eliminating explicit storage of previous data. Combining this ray-query distillation loss with standard incremental optimization enables continual learning at a constant (O(1)) memory footprint, with no external memory modules or replay buffers, while PSNR and other metrics closely approach those of full retraining. The core contribution is the integration of NeRF's implicit field representation with self-distillation, establishing an efficient and practical paradigm for continual neural rendering under resource constraints.
📝 Abstract
Hinged on the representation power of neural networks, neural radiance fields (NeRF) have recently emerged as one of the promising and widely applicable methods for 3D object and scene representation. However, NeRF faces challenges in practical applications, such as large-scale scenes and edge devices with a limited amount of memory, where data needs to be processed sequentially. Under such incremental learning scenarios, neural networks are known to suffer catastrophic forgetting: easily forgetting previously seen data after training with new data. We observe that previous incremental learning algorithms are limited by either low performance or memory scalability issues. As such, we develop a Memory-Efficient Incremental Learning algorithm for NeRF (MEIL-NeRF). MEIL-NeRF takes inspiration from NeRF itself in that a neural network can serve as a memory that returns pixel RGB values, given rays as queries. Upon the motivation, our framework learns which rays to query NeRF to extract previous pixel values. The extracted pixel values are then used to train NeRF in a self-distillation manner to prevent catastrophic forgetting. As a result, MEIL-NeRF demonstrates constant memory consumption and competitive performance.
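The self-distillation loop in the abstract can be sketched in a few lines: keep a frozen copy of the network trained on earlier frames, regenerate "past" pixel values by querying that copy with sampled rays, and add a distillation term so the current network matches them while fitting new data. The sketch below is a deliberately toy illustration with a linear "renderer" in NumPy (the real model is an MLP over positional encodings, and MEIL-NeRF additionally learns which rays to query; all names and the random-ray sampling here are illustrative assumptions, not the paper's implementation).

```python
import numpy as np

def render(weights, rays):
    # Toy stand-in for NeRF volume rendering: a linear map from
    # ray features to RGB. (Hypothetical; the real model is an MLP.)
    return rays @ weights

def train_incremental(lam, steps=300, lr=0.05, seed=0):
    """Train on new frames while distilling from a frozen old copy.

    lam weights the self-distillation term; lam=0 disables replay entirely.
    """
    rng = np.random.default_rng(seed)
    old_weights = rng.normal(size=(6, 3))   # network after the previous task
    weights = old_weights.copy()            # current network, warm-started

    new_rays = rng.normal(size=(64, 6))     # incoming frames of the new task
    new_rgb = rng.random((64, 3))

    for _ in range(steps):
        # Self-distillation: regenerate previous pixel values by querying
        # the frozen network with sampled rays -- no stored images needed.
        # (Random rays here; MEIL-NeRF *learns* which rays to query.)
        past_rays = rng.normal(size=(64, 6))
        pseudo_rgb = render(old_weights, past_rays)

        # Combined objective: fit new data + match regenerated old data.
        err_new = render(weights, new_rays) - new_rgb
        err_old = render(weights, past_rays) - pseudo_rgb
        grad = 2.0 * (new_rays.T @ err_new / err_new.size
                      + lam * past_rays.T @ err_old / err_old.size)
        weights -= lr * grad

    # Forgetting proxy: how far the network drifted from its old function.
    return float(np.linalg.norm(weights - old_weights))

drift_plain = train_incremental(lam=0.0)      # no replay: drifts freely
drift_distilled = train_incremental(lam=1.0)  # distillation curbs the drift
```

With the distillation term on, the network stays measurably closer to its old function while still fitting the new frames, which is the paper's mechanism for mitigating forgetting without growing memory use.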
Problem

Research questions and friction points this paper is trying to address.

Addresses catastrophic forgetting in incremental NeRF learning
Solves memory scalability for large-scale NeRF applications
Enables sequential data processing with constant memory usage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Memory-efficient incremental learning algorithm
Self-distillation training on pixel values regenerated by querying the network
Constant memory consumption with competitive performance