🤖 AI Summary
This work investigates the potential of neural network weights to serve as structured representations, focusing on semantic modeling in the weight space of neural fields. To this end, we propose a multiplicative low-rank adaptation (LoRA) mechanism that imposes structured constraints on a pretrained neural field backbone, enabling the weights themselves to encode interpretable semantic information. Crucially, this structure is obtained without introducing additional parameters, yielding high-quality, compact, and generalizable representations directly in weight space. Experiments demonstrate substantial improvements over existing weight-space approaches across 2D and 3D reconstruction and generation tasks, and our method supports fine-grained semantic editing and cross-task transferability. Furthermore, when integrated into latent diffusion frameworks, it improves both generation fidelity and controllability.
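Neither the summary nor the abstract spells out the exact parameterization, so the snippet below is a minimal PyTorch sketch of one plausible multiplicative LoRA layer, assuming an elementwise modulation `W_eff = W * (1 + B @ A)` of a frozen base weight; the class name, rank, and initialization are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiplicativeLoRALinear(nn.Module):
    """Hypothetical sketch: a frozen base linear layer modulated by a
    low-rank multiplicative factor. The paper's exact form may differ."""

    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # shared pretrained backbone stays frozen
        out_features, in_features = base.weight.shape
        # Per-instance trainable factors. B @ A = 0 at init, so the
        # effective weight starts exactly at the pretrained backbone.
        self.A = nn.Parameter(0.01 * torch.randn(rank, in_features))
        self.B = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Elementwise multiplicative modulation: W_eff = W * (1 + B @ A).
        w_eff = self.base.weight * (1.0 + self.B @ self.A)
        return F.linear(x, w_eff, self.base.bias)

# Example: wrap one layer of a small pretrained coordinate MLP.
backbone = nn.Linear(2, 256)                      # stand-in pretrained layer
lora_layer = MultiplicativeLoRALinear(backbone)   # only A, B are trainable
out = lora_layer(torch.randn(8, 2))               # shape (8, 256)
```

Under this reading, only the low-rank factors A and B are optimized per instance, and because the modulation is multiplicative it can be merged back into the base weight after fitting (W ← W * (1 + BA)), which would be consistent with the claim that no additional parameters are introduced.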
📝 Abstract
In this work, we investigate the potential of weights to serve as effective representations, focusing on neural fields. Our key insight is that constraining the optimization space through a pretrained base model and low-rank adaptation (LoRA) can induce structure in weight space. Across reconstruction, generation, and analysis tasks on 2D and 3D data, we find that multiplicative LoRA weights achieve high representation quality while exhibiting distinctiveness and semantic structure. When used with latent diffusion models, multiplicative LoRA weights enable higher-quality generation than existing weight-space methods.
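As a rough sketch of the latent-diffusion use case, the code below flattens fitted LoRA factors into latent vectors and trains a standard noise-prediction (DDPM-style) model over them; the helper names, the toy MLP denoiser, and the flattening scheme are all assumptions for illustration and are not taken from the abstract.

```python
import torch
import torch.nn as nn

def lora_latent(layers) -> torch.Tensor:
    """Flatten one instance's LoRA factors (A, B per layer) into a vector."""
    return torch.cat([torch.cat([l.A.flatten(), l.B.flatten()]) for l in layers])

class WeightSpaceDenoiser(nn.Module):
    """Toy MLP noise predictor over flattened LoRA latents (hypothetical;
    the paper's diffusion architecture is not described here)."""

    def __init__(self, dim: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, z_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Condition on the normalized diffusion timestep t in [0, 1).
        return self.net(torch.cat([z_t, t[:, None]], dim=-1))

def ddpm_step(model, z0, alpha_bar):
    """One DDPM-style training step: noise latents at a random timestep,
    then regress the added noise."""
    t = torch.randint(0, len(alpha_bar), (z0.shape[0],))
    eps = torch.randn_like(z0)
    ab = alpha_bar[t][:, None]
    z_t = ab.sqrt() * z0 + (1 - ab).sqrt() * eps
    return nn.functional.mse_loss(model(z_t, t.float() / len(alpha_bar)), eps)
```

To generate a new neural field under this sketch, a sample from the trained model would be split back into per-layer A and B factors and merged into the shared backbone.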