On the Internal Representations of Graph Metanetworks

📅 2025-03-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates how graph metanetworks (GMNs) internally construct representations solely from network parameters, aiming to uncover differences in representational geometry between GMNs and conventional architectures such as MLPs and CNNs. Methodologically, the authors apply centered kernel alignment (CKA) to GMN representation analysis for the first time, combining weight-space modeling with graph neural networks to systematically characterize the representational structure learned from parameter space. The results show that GMN representations are organized differently from those of MLPs and CNNs, offering an initial quantitative, interpretable account of the "weights-as-representations" learning paradigm. The study thereby advances mechanistic understanding and interpretability research in parameter-space learning.

📝 Abstract
Weight space learning is an emerging paradigm in the deep learning community. The primary goal of weight space learning is to extract informative features from a set of parameters using specially designed neural networks, often referred to as *metanetworks*. However, it remains unclear how these metanetworks learn solely from parameters. To address this, we take the first step toward understanding *representations* of metanetworks, specifically graph metanetworks (GMNs), which achieve state-of-the-art results in this field, using centered kernel alignment (CKA). Through various experiments, we reveal that GMNs and general neural networks (*e.g.,* multi-layer perceptrons (MLPs) and convolutional neural networks (CNNs)) differ in terms of their representation space.
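The paper's core analysis tool, CKA, measures similarity between two sets of representations of the same inputs. A minimal sketch of the linear variant in NumPy (array shapes and example data here are illustrative, not taken from the paper):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between representation matrices
    X (n x d1) and Y (n x d2), where rows correspond to the same n examples."""
    # Center each feature dimension across examples
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 64))   # e.g., activations from one layer
B = rng.standard_normal((100, 32))   # e.g., activations from another layer
print(linear_cka(A, A))  # identical representations give 1.0
print(linear_cka(A, B))  # unrelated random representations score near 0
```

Linear CKA is invariant to orthogonal transformations and isotropic scaling of either representation, which is what makes it suitable for comparing layers of different widths, such as GMN layers against MLP or CNN layers.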
Problem

Research questions and friction points this paper is trying to address.

Understanding how graph metanetworks learn from parameters.
Exploring internal representations of graph metanetworks using CKA.
Comparing representation spaces of GMNs and traditional neural networks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Applies graph metanetworks to the analysis of weight space learning.
Uses centered kernel alignment to reveal differences in representations.
Shows that GMN representation spaces differ structurally from those of MLPs and CNNs.
Taesun Yeom
POSTECH
Deep Learning
Jaeho Lee
Pohang University of Science and Technology (POSTECH)