NNiT: Width-Agnostic Neural Network Generation with Structurally Aligned Weight Spaces

πŸ“… 2026-02-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing methods for neural network parameter generation are constrained by fixed architectures, weight dimensions, and permutation symmetries, hindering generalization across diverse architectures. This work proposes a width-agnostic network generation framework that overcomes these limitations by partitioning weight matrices into structured local fields. For the first time, it jointly models discrete architectural tokens and continuous weight blocks within a unified sequence model. The approach integrates a graph hypernetwork, a CNN decoder, and a patch-based weight representation to achieve structural alignment in weight space. Evaluated on the ManiSkill3 robotic manipulation benchmark, the method achieves a success rate above 85% when generating functional weights for unseen architectural topologies, substantially outperforming baseline approaches that fail to generalize beyond their training architectures.
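The "partitioning weight matrices into structured local fields" idea can be sketched as a simple patch tokenizer: a layer's weight matrix is zero-padded and split into fixed-size blocks, so layers of different widths produce different numbers of tokens but every token has the same dimension. This is a minimal illustration, not the paper's actual implementation; the function name and patch size are assumptions.

```python
import numpy as np

def weight_to_patches(W, patch=4):
    """Tokenize a weight matrix into fixed-size square patches (sketch).

    Pads W with zeros up to a multiple of `patch`, then splits it into
    non-overlapping patch x patch blocks, each flattened into one token.
    The token count varies with layer width, but every token has the same
    dimension (patch * patch), so one sequence model can consume weights
    from MLPs of different widths.
    """
    rows, cols = W.shape
    pad_r = (-rows) % patch  # rows of zero padding needed
    pad_c = (-cols) % patch  # cols of zero padding needed
    Wp = np.pad(W, ((0, pad_r), (0, pad_c)))
    R, C = Wp.shape
    blocks = Wp.reshape(R // patch, patch, C // patch, patch)
    # Reorder so each (patch, patch) block is contiguous, then flatten.
    tokens = blocks.transpose(0, 2, 1, 3).reshape(-1, patch * patch)
    return tokens

# Two layers of different widths yield tokens of identical dimension.
W1 = np.random.randn(8, 6)    # padded to 8x8 -> 4 tokens
W2 = np.random.randn(16, 16)  # -> 16 tokens
t1, t2 = weight_to_patches(W1), weight_to_patches(W2)
print(t1.shape, t2.shape)  # (4, 16) (16, 16)
```

Because every token lives in the same 16-dimensional space regardless of layer width, a single transformer can process weight sequences from heterogeneous MLPs.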

πŸ“ Abstract
Generative modeling of neural network parameters is often tied to architectures because standard parameter representations rely on known weight-matrix dimensions. Generation is further complicated by permutation symmetries that allow networks to model similar input-output functions while having widely different, unaligned parameterizations. In this work, we introduce Neural Network Diffusion Transformers (NNiTs), which generate weights in a width-agnostic manner by tokenizing weight matrices into patches and modeling them as locally structured fields. We establish that Graph HyperNetworks (GHNs) with a convolutional neural network (CNN) decoder structurally align the weight space, creating the local correlation necessary for patch-based processing. Focusing on MLPs, where permutation symmetry is especially apparent, NNiT generates fully functional networks across a range of architectures. Our approach jointly models discrete architecture tokens and continuous weight patches within a single sequence model. On ManiSkill3 robotics tasks, NNiT achieves >85% success on architecture topologies unseen during training, while baseline approaches fail to generalize.
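The abstract's "jointly models discrete architecture tokens and continuous weight patches within a single sequence model" can be pictured as one interleaved token stream per network: a discrete token announcing each layer's shape, followed by that layer's continuous weight patches. The token names, patch size, and layout below are hypothetical placeholders for illustration, not the paper's encoding.

```python
import numpy as np

PATCH = 4  # assumed patch size for this sketch

def layer_to_sequence(width_in, width_out, W):
    """Emit one discrete architecture token, then the layer's weight patches.

    The ("ARCH", ...) tuple stands in for a discrete vocabulary token; each
    ("WEIGHT", ...) entry carries one continuous patch vector. A sequence
    model trained on such streams sees the architecture and the weights in
    the same context window.
    """
    arch_token = ("ARCH", width_in, width_out)  # discrete token
    pad_r = (-W.shape[0]) % PATCH
    pad_c = (-W.shape[1]) % PATCH
    Wp = np.pad(W, ((0, pad_r), (0, pad_c)))
    R, C = Wp.shape
    patches = (Wp.reshape(R // PATCH, PATCH, C // PATCH, PATCH)
                 .transpose(0, 2, 1, 3)
                 .reshape(-1, PATCH * PATCH))
    return [arch_token] + [("WEIGHT", p) for p in patches]

# An 8x6 layer: 1 architecture token + 4 weight-patch tokens.
seq = layer_to_sequence(6, 8, np.random.randn(8, 6))
print(len(seq), seq[0])  # 5 ('ARCH', 6, 8)
```

Concatenating such per-layer sequences gives one variable-length stream per network, which is what lets the generator emit both a topology and its weights without committing to fixed matrix dimensions.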
Problem

Research questions and friction points this paper is trying to address.

neural network generation
width-agnostic
permutation symmetry
weight space alignment
architecture generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

width-agnostic generation
structured weight alignment
patch-based weight modeling
permutation symmetry
Neural Network Diffusion Transformers
πŸ‘₯ Authors
Jiwoo Kim
Department of Artificial Intelligence, Sungkyunkwan University
Swarajh Mehta
Department of Computer Science, Duke University, NC, USA
Hao-Lun Hsu
Department of Computer Science, Duke University, NC, USA
Hyunwoo Ryu
MIT
Artificial Intelligence, Robotics
Yudong Liu
Department of Electrical and Computer Engineering, Duke University, NC, USA
Miroslav Pajic
Department of Electrical and Computer Engineering, Duke University, NC, USA