On the Necessity of Learnable Sheaf Laplacians

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether mitigating over-smoothing on heterophilic graphs necessitates learnable restriction maps in the construction of sheaf Laplacians. To this end, we propose the Identity Sheaf Network (ISN), a baseline model employing fixed identity restriction maps, and conduct systematic ablation studies across five established heterophilic graph benchmarks. By introducing a normalized Rayleigh quotient metric, we empirically analyze the over-smoothing behavior of various sheaf structures. Our results demonstrate that ISN achieves performance comparable to multiple learnable sheaf networks without exacerbating over-smoothing, thereby challenging the presumed necessity of learnable restriction maps in current sheaf-based neural architectures and offering a new perspective toward simplifying sheaf design.
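The identity-map construction discussed above can be made concrete with a short sketch. The code below builds a sheaf Laplacian block by block from per-edge restriction maps using the standard definition (diagonal blocks accumulate F^T F, off-diagonal blocks are -F_u^T F_v); function and variable names are illustrative, not from the paper. When every restriction map is fixed to the identity, as in ISN, the result collapses to the ordinary graph Laplacian tensored with I_d.

```python
import numpy as np

def sheaf_laplacian(edges, n, d, maps=None):
    """Sheaf Laplacian for a graph with n nodes and d-dimensional stalks.

    edges: list of (u, v) pairs. maps: dict mapping (edge_index, node) to a
    d x d restriction map; maps=None fixes every map to the identity (the
    Identity Sheaf Network setting).
    """
    L = np.zeros((n * d, n * d))
    I = np.eye(d)
    for k, (u, v) in enumerate(edges):
        Fu = maps[(k, u)] if maps else I  # restriction map F_{u <| e}
        Fv = maps[(k, v)] if maps else I  # restriction map F_{v <| e}
        # Diagonal blocks accumulate F^T F; off-diagonal blocks are -F_u^T F_v.
        L[u*d:(u+1)*d, u*d:(u+1)*d] += Fu.T @ Fu
        L[v*d:(v+1)*d, v*d:(v+1)*d] += Fv.T @ Fv
        L[u*d:(u+1)*d, v*d:(v+1)*d] -= Fu.T @ Fv
        L[v*d:(v+1)*d, u*d:(u+1)*d] -= Fv.T @ Fu
    return L

# With identity maps, the sheaf Laplacian equals L_graph (x) I_d (Kronecker product).
edges = [(0, 1), (1, 2)]  # path graph on 3 nodes
L_id = sheaf_laplacian(edges, n=3, d=2)
L_graph = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
assert np.allclose(L_id, np.kron(L_graph, np.eye(2)))
```

The Kronecker-product identity is what makes ISN a natural ablation: the identity sheaf adds no expressive power beyond the plain graph Laplacian acting channel-wise.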

📝 Abstract
Sheaf Neural Networks (SNNs) were introduced as an extension of Graph Convolutional Networks to address oversmoothing on heterophilous graphs by attaching a sheaf to the input graph and replacing the adjacency-based operator with a sheaf Laplacian defined by (learnable) restriction maps. Prior work motivates this design through theoretical properties of sheaf diffusion and the kernel of the sheaf Laplacian, suggesting that suitable non-identity restriction maps can avoid representations converging to constants across connected components. Since oversmoothing can also be mitigated through residual connections and normalization, we revisit a trivial sheaf construction to ask whether the additional complexity of learning restriction maps is necessary. We introduce an Identity Sheaf Network baseline, where all restriction maps are fixed to the identity, and use it to ablate the empirical improvements reported by sheaf-learning architectures. Across five popular heterophilic benchmarks, the identity baseline achieves comparable performance to a range of SNN variants. Finally, we introduce the Rayleigh quotient as a normalized measure for comparing oversmoothing across models and show that, in trained networks, the behavior predicted by the diffusion-based analysis of SNNs is not reflected empirically. In particular, Identity Sheaf Networks do not appear to suffer more significant oversmoothing than their SNN counterparts.
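The abstract's oversmoothing measure can be sketched with the standard Rayleigh quotient of a Laplacian, R(X) = trace(X^T L X) / trace(X^T X): values near zero mean the features lie close to ker(L), i.e. are heavily smoothed, and the normalization cancels feature scale so different models can be compared. This is a minimal sketch of the textbook definition; the paper's exact normalization may differ.

```python
import numpy as np

def rayleigh_quotient(L, X):
    """Normalized smoothness of node features X under Laplacian L.

    R = trace(X^T L X) / trace(X^T X). Near 0 means X is close to ker(L)
    (oversmoothed); feature magnitude cancels, enabling cross-model
    comparison. Standard definition, not necessarily the paper's exact metric.
    """
    return np.trace(X.T @ L @ X) / np.trace(X.T @ X)

# Path graph on 3 nodes: constant features lie in ker(L), so the quotient is 0.
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
X_const = np.ones((3, 2))                          # fully smoothed features
X_rand = np.array([[1., 0.], [0., 1.], [-1., 2.]])  # non-constant features
print(rayleigh_quotient(L, X_const))  # 0.0
print(rayleigh_quotient(L, X_rand))   # positive: features are not oversmoothed
```

Tracking this quotient layer by layer in trained networks is what lets the paper compare oversmoothing across identity and learnable sheaf variants on a common scale.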
Problem

Research questions and friction points this paper is trying to address.

oversmoothing
heterophilous graphs
Sheaf Neural Networks
sheaf Laplacian
restriction maps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sheaf Neural Networks
learnable sheaf Laplacians
oversmoothing
heterophilous graphs
Rayleigh quotient
Ferran Hernandez Caralt
Department of Computer Science and Technology, University of Cambridge
Mar Gonzàlez i Català
Department of Computer Science and Technology, University of Cambridge
Pietro Liò
Professor, University of Cambridge
AI & Comp Biology -> Medicine
Adrián Bazaga
Senior Research Scientist, Microsoft | PhD, University of Cambridge
Deep Learning, Language Models, Multimodality, Generative Models, Applied AI