🤖 AI Summary
Neural fields (NFs) lack efficient encoding methods for lightweight dynamic updates. Method: This paper introduces Low-Rank Adaptation (LoRA) to the neural field domain for the first time, proposing a parameter-efficient and computationally lightweight framework for instance-level incremental editing. Unlike prior approaches, it does not rely on large pre-trained models; instead, it employs low-rank matrix decomposition to achieve compact parameter representation and rapid fine-tuning of neural fields, supporting multimodal tasks including image filtering, video compression, and geometric editing. Contribution/Results: Experiments demonstrate that the method reduces parameter count and GPU memory consumption by over 90% compared to baseline methods, enables real-time updates on resource-constrained devices, and maintains high-fidelity reconstruction quality. This work establishes a novel paradigm for personalized neural field adaptation and edge deployment.
📝 Abstract
Processing visual data often involves small adjustments or sequences of changes, such as in image filtering, surface smoothing, and video storage. While established graphics techniques like normal mapping and video compression exploit redundancy to encode such small changes efficiently, the problem of encoding small changes to neural fields (NFs) -- neural network parameterizations of visual or physical functions -- has received less attention. We propose a parameter-efficient strategy for updating neural fields using low-rank adaptations (LoRA). LoRA, a parameter-efficient fine-tuning method from the LLM community, encodes small updates to pre-trained models with minimal computational overhead. We adapt LoRA to instance-specific neural fields, avoiding the need for large pre-trained models and yielding a pipeline suitable for low-compute hardware. We validate our approach with experiments in image filtering, video compression, and geometry editing, demonstrating its effectiveness and versatility for representing neural field updates.
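To make the core idea concrete, here is a minimal NumPy sketch of a low-rank update to one layer of a neural field MLP. This is an illustration of the general LoRA mechanism (freeze the base weight `W`, train only the low-rank factors `A` and `B` so the effective weight becomes `W + BA`), not the paper's actual implementation; all names, dimensions, and the choice of rank are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer of an instance-level neural field MLP: y = W x
d_in, d_out, rank = 64, 64, 2
W = rng.normal(size=(d_out, d_in)) / np.sqrt(d_in)  # frozen base weight

# LoRA factors: the edit is the low-rank matrix dW = B @ A.
A = rng.normal(size=(rank, d_in)) * 0.01  # small random init
B = np.zeros((d_out, rank))               # zero init: edit starts as identity

def forward(x):
    # Base output plus low-rank correction; only A and B would be trained
    # when fine-tuning the field toward an edited signal.
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)
y = forward(x)

# Parameter cost of storing the edit: low-rank factors vs. a full weight delta.
full_params = W.size                 # 64 * 64 = 4096
lora_params = A.size + B.size        # 2 * 64 + 64 * 2 = 256
print(f"edit params: {lora_params} vs full: {full_params} "
      f"({100 * (1 - lora_params / full_params):.0f}% fewer)")
```

Because `B` starts at zero, the adapted field initially reproduces the base field exactly, and each stored edit costs only the two thin factors rather than a full copy of the weights, which is what makes per-instance, incremental updates cheap.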