Embed Any NeRF: Graph Meta-Networks for Neural Tasks on Arbitrary NeRF Architectures

📅 2025-02-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing NeRF processing frameworks are constrained by fixed, predefined network architectures (e.g., MLPs or tri-planar representations), which limits generalization to unseen, heterogeneous architectures and prevents cross-architecture weight reuse. Method: the paper proposes the first architecture-agnostic NeRF weight-embedding framework: NeRF weights are modeled as graphs, a Graph Meta-Network extracts topology-aware representations, and contrastive learning shapes a transferable latent space. Contribution/Results: the approach yields a unified representation supporting downstream tasks (e.g., classification and retrieval) across arbitrary NeRF architectures. Experiments show performance on par with or surpassing architecture-specific baselines, achieving, for the first time, seamless cross-architecture inference for heterogeneous NeRFs, including MLP-based and tri-planar networks. This lays a foundation for modular, reusable NeRF components and facilitates large-scale NeRF model management and deployment.
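The core idea of modeling NeRF weights as graphs can be illustrated with a minimal sketch: each neuron becomes a node and each weight a directed edge feature, so any MLP, regardless of depth or width, maps to the same graph format a Graph Meta-Network can consume. This is a simplified illustration of the general weights-as-graphs idea, not the paper's exact construction (which also handles biases and non-MLP components such as tri-planes); `mlp_to_graph` is a hypothetical helper name.

```python
import numpy as np

def mlp_to_graph(layer_weights):
    """Convert a list of MLP weight matrices into a (nodes, edges) graph.

    Each neuron is a node; each scalar weight w_ij becomes a directed
    edge (i -> j) carrying that weight as its edge feature. Because the
    graph size adapts to the weight matrices, MLPs of any depth/width
    map into one common input format.
    """
    sizes = [layer_weights[0].shape[0]] + [W.shape[1] for W in layer_weights]
    offsets = np.cumsum([0] + sizes)          # global node index of each layer's first neuron
    num_nodes = int(offsets[-1])
    edges, edge_feats = [], []
    for l, W in enumerate(layer_weights):     # W has shape (fan_in, fan_out)
        for i in range(W.shape[0]):
            for j in range(W.shape[1]):
                edges.append((offsets[l] + i, offsets[l + 1] + j))
                edge_feats.append(W[i, j])
    return num_nodes, edges, np.array(edge_feats)

# Toy 2-layer MLP: 3 -> 4 -> 2 gives 9 nodes and 3*4 + 4*2 = 20 edges
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 4)), rng.standard_normal((4, 2))]
n_nodes, edges, feats = mlp_to_graph(weights)
print(n_nodes, len(edges))  # 9 20
```

A graph neural network operating on this representation is permutation-equivariant over neurons, which is what lets a single meta-network process architectures unseen at training time.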

📝 Abstract
Neural Radiance Fields (NeRFs) have emerged as a groundbreaking paradigm for representing 3D objects and scenes by encoding shape and appearance information into the weights of a neural network. Recent works have shown how such weights can be used as input to frameworks processing them to solve deep learning tasks. Yet, these frameworks can only process NeRFs with a specific, predefined architecture. In this paper, we present the first framework that can ingest NeRFs with multiple architectures and perform inference on architectures unseen at training time. We achieve this goal by training a Graph Meta-Network in a representation learning framework. Moreover, we show how a contrastive objective is conducive to obtaining an architecture-agnostic latent space. In experiments on both MLP-based and tri-planar NeRFs, our approach demonstrates robust performance in classification and retrieval tasks that either matches or exceeds that of existing frameworks constrained to single architectures, thus providing the first architecture-agnostic method to perform tasks on NeRFs by processing their weights.
Problem

Research questions and friction points this paper is trying to address.

Process NeRFs with multiple architectures
Perform inference on unseen architectures
Achieve architecture-agnostic latent space
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph Meta-Network framework
Architecture-agnostic latent space
Contrastive objective learning
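The contrastive objective mentioned above can be sketched with a standard symmetric InfoNCE loss: embeddings of the same scene produced by two different NeRF architectures are treated as a positive pair, all other pairs in the batch as negatives. This is a generic contrastive formulation assumed for illustration, not necessarily the paper's exact loss; `info_nce` is a hypothetical helper name.

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE over paired embeddings.

    z_a[i] and z_b[i] embed the SAME scene from two different NeRF
    architectures (positive pair); off-diagonal pairs are negatives.
    Minimizing this pulls cross-architecture embeddings of the same
    object together in the shared latent space.
    """
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature        # (N, N) cosine similarities
    diag = np.arange(len(z_a))                # positives sit on the diagonal

    def xent(lg):
        # cross-entropy with the diagonal as the target class
        lg = lg - lg.max(axis=1, keepdims=True)
        log_p = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_p[diag, diag].mean()

    return 0.5 * (xent(logits) + xent(logits.T))

# Correctly paired embeddings should score a lower loss than mispaired ones
rng = np.random.default_rng(1)
z = rng.standard_normal((8, 16))
print(info_nce(z, z) < info_nce(z, np.roll(z, 1, axis=0)))  # True
```

In the cross-architecture setting, the two "views" come from encoding different weight-space representations (e.g., an MLP and a tri-planar NeRF) of the same scene, which encourages an architecture-agnostic latent space.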