AI Summary
Existing graph distillation methods rely on full-graph training, lack support for model or hyperparameter modifications, and achieve low compression ratios, severely limiting flexibility and reusability. This paper proposes the first gradient-free, linear-time, model-agnostic graph distillation framework for node classification. Our core innovations are: (1) a distillation paradigm grounded in computation-tree modeling and exemplar tree sampling; (2) a structure-aware graph synthesis mechanism that generates sparse yet semantically faithful distilled graphs; and (3) rigorous theoretical guarantees ensuring compatibility with arbitrary GNN architectures and hyperparameter configurations. Evaluated on six real-world datasets, our method achieves superior average accuracy over all baselines, accelerates distillation by 22× on average, and significantly enhances efficiency, generalization, and deployment robustness.
Abstract
Graph distillation has emerged as a promising avenue to enable scalable training of GNNs by compressing the training dataset while preserving essential graph characteristics. Our study uncovers significant shortcomings in current graph distillation techniques. First, the majority of the algorithms paradoxically require training on the full dataset to perform distillation. Second, due to their gradient-emulating approach, these methods require fresh distillation for any change in hyperparameters or GNN architecture, limiting their flexibility and reusability. Finally, they fail to achieve substantial size reduction because they synthesize fully-connected, edge-weighted graphs. To address these challenges, we present Bonsai, a novel graph distillation method empowered by the observation that \textit{computation trees} form the fundamental processing units of message-passing GNNs. Bonsai distills datasets by encoding a careful selection of \textit{exemplar} trees that maximize the representation of all computation trees in the training set. This unique approach makes Bonsai the first linear-time, model-agnostic graph distillation algorithm for node classification that outperforms existing baselines across $6$ real-world datasets on accuracy, while being $22$ times faster on average. Bonsai is grounded in rigorous mathematical guarantees on the adopted approximation strategies, making it robust to GNN architectures, datasets, and parameters.
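To make the core idea concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: it encodes each node's depth-$d$ computation tree with a Weisfeiler-Lehman-style hash and then greedily picks exemplar roots that cover the most distinct tree structures. All function names (`computation_trees`, `greedy_exemplars`) and the set-cover surrogate are hypothetical simplifications of the exemplar-selection step described above.

```python
from collections import defaultdict

def computation_trees(adj, depth):
    """WL-style surrogate: after `depth` refinement rounds, two nodes
    share a color iff their depth-`depth` computation trees match.
    `adj` maps each node to a list of its neighbors."""
    color = {v: 0 for v in adj}  # uniform initial labels
    for _ in range(depth):
        color = {v: hash((color[v], tuple(sorted(color[u] for u in adj[v]))))
                 for v in adj}
    return color

def greedy_exemplars(adj, depth, budget):
    """Pick up to `budget` roots whose computation trees represent the
    largest structure classes first (a simple coverage heuristic)."""
    color = computation_trees(adj, depth)
    groups = defaultdict(list)
    for v, c in color.items():
        groups[c].append(v)
    # one exemplar per distinct tree structure, largest classes first
    ranked = sorted(groups.values(), key=len, reverse=True)
    return [grp[0] for grp in ranked[:budget]]
```

For example, on a path `1-0-2` plus an isolated edge `3-4`, a depth-1 pass distinguishes the degree-2 center from the degree-1 leaves, so a budget of two exemplars keeps one leaf-like root and the center. The real method additionally weights exemplars by how well they represent the remaining trees in embedding space, rather than by exact structure matches.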