Weisfeiler and Leman Go Gambling: Why Expressive Lottery Tickets Win

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the Lottery Ticket Hypothesis (LTH) in Graph Neural Networks (GNNs), focusing on the expressive power of sparse subnetworks, i.e., their ability to distinguish non-isomorphic graphs, as a key determinant of high-performing "winning tickets." The authors formally state and prove the **Strong Expressive Lottery Ticket Hypothesis**, establishing a theoretical framework grounded in alignment with the Weisfeiler-Leman (WL) graph isomorphism test. They derive conditions under which a sparsely initialized GNN matches the expressivity of the full network and show that highly expressive tickets can accelerate convergence and improve generalization. Empirical illustrations on drug discovery tasks show that WL-aligned sparse subnetworks maintain, or even surpass, the performance of dense GNNs while drastically reducing parameter count. The results provide a principled theoretical foundation for efficient GNN pruning and training.

📝 Abstract
The lottery ticket hypothesis (LTH) is well-studied for convolutional neural networks but has been validated only empirically for graph neural networks (GNNs), for which theoretical findings are largely lacking. In this paper, we identify the expressivity of sparse subnetworks, i.e. their ability to distinguish non-isomorphic graphs, as crucial for finding winning tickets that preserve the predictive performance. We establish conditions under which the expressivity of a sparsely initialized GNN matches that of the full network, particularly when compared to the Weisfeiler-Leman test, and in that context put forward and prove a Strong Expressive Lottery Ticket Hypothesis. We subsequently show that an increased expressivity in the initialization potentially accelerates model convergence and improves generalization. Our findings establish novel theoretical foundations for both LTH and GNN research, highlighting the importance of maintaining expressivity in sparsely initialized GNNs. We illustrate our results using examples from drug discovery.
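The Weisfeiler-Leman test that the abstract uses as an expressivity yardstick can be sketched as iterative color refinement. Below is a minimal, illustrative Python implementation of the 1-WL variant (not the paper's code; the function name and example graphs are invented for this sketch). It also shows the classic failure case, a 6-cycle versus two disjoint triangles, which 1-WL cannot distinguish; GNN expressivity is typically bounded by exactly this test.

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-dimensional Weisfeiler-Leman (color refinement) on an adjacency-list graph.

    Returns the histogram of node colors after `rounds` refinement steps;
    differing histograms certify non-isomorphism (the converse does not hold).
    """
    colors = {v: 0 for v in adj}  # uniform initial coloring
    for _ in range(rounds):
        # A node's signature is its own color plus the multiset of neighbor colors.
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Injectively relabel signatures with small integers.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return Counter(colors.values())

# A 6-cycle and two disjoint triangles: non-isomorphic, yet both 2-regular,
# so 1-WL assigns them identical color histograms.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}

# A 3-node path vs. a triangle: degree sequences differ, so 1-WL separates them.
path3 = {0: [1], 1: [0, 2], 2: [1]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```

Here `wl_colors(cycle6)` equals `wl_colors(two_triangles)`, while `wl_colors(path3)` differs from `wl_colors(triangle)`.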
Problem

Research questions and friction points this paper is trying to address.

Theoretical validation of lottery ticket hypothesis for graph neural networks
Conditions for sparse GNNs to match full network expressivity
Impact of expressivity on GNN convergence and generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Expressivity of sparse subnetworks identified as key to finding winning tickets
Strong Expressive Lottery Ticket Hypothesis stated and proven
Increased expressivity at initialization can accelerate convergence and improve generalization
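The "winning ticket" framing above presupposes a way to extract a sparse subnetwork from a dense one. For reference, here is a minimal sketch of magnitude pruning, the standard LTH baseline; it is illustrative only and does not implement the paper's WL-alignment criterion. The function name and weight shapes are invented for the example.

```python
import numpy as np

def magnitude_mask(weights, sparsity):
    """Boolean mask keeping roughly the largest-magnitude (1 - sparsity) fraction of weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to prune
    if k == 0:
        return np.ones(weights.shape, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.abs(weights) > threshold

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))           # stand-in for one GNN layer's weights
mask = magnitude_mask(W, sparsity=0.8)
sparse_W = W * mask                   # sparse "ticket": pruned entries are zero
```

In the LTH setting the surviving weights would then be rewound to their initial values and retrained; the paper's point is that which entries survive determines whether the sparse network retains WL-level expressivity.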