AI Summary
Existing graph pooling methods predominantly rely on coarse-grained node pruning driven by node degree, neglecting feature-level relevance and task-specific semantics and thereby causing substantial information loss. To address this, we propose a multi-view node pruning framework: first, construct multiple graph views via modality partitioning or random feature splitting; second, introduce a differentiable node importance scoring mechanism jointly optimized with a task-aware loss and a reconstruction loss, enabling fine-grained, task-guided, and feature-sensitive evaluation. This work is the first to integrate multi-view learning and a reconstruction loss into graph node pruning, decoupling importance estimation from the topological constraints inherent in conventional attention-based approaches. The method is plug-and-play and fully compatible with hierarchical pooling architectures. Extensive experiments on multiple benchmark datasets demonstrate significant gains over two mainstream pooling baselines and consistent superiority over existing pruning methods. Ablation studies confirm that both multi-view encoding and the reconstruction loss are indispensable components.
Abstract
Graph pooling, which compresses a whole graph into a smaller coarsened graph, is an essential component of graph representation learning. To compress a given graph efficiently, graph pooling methods often drop nodes using attention-based scores learned with the task loss. However, this often amounts to simply removing low-degree nodes without considering their feature-level relevance to the given task. To address this problem, we propose Multi-View Pruning (MVP), a graph pruning method based on a multi-view framework and a reconstruction loss. Given a graph, MVP first constructs multiple graphs for different views, either by utilizing predefined modalities or by randomly partitioning the input features, to assess the importance of each node from diverse perspectives. It then learns a score for each node by considering both the reconstruction and the task loss. MVP can be incorporated into any hierarchical pooling framework to score nodes. We validate MVP on multiple benchmark datasets by coupling it with two graph pooling methods, and show that it significantly improves the performance of the base pooling methods, outperforming all baselines. Further analysis shows that both the encoding of multiple views and the reconstruction loss are key to the success of MVP, and that it indeed identifies nodes that domain knowledge deems less important.
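The pipeline described above (random feature splitting into views, per-view node scoring, a reconstruction term, and top-k node retention) can be illustrated with a minimal NumPy sketch. This is not the paper's actual architecture: the function names, the linear per-view scorers, and the score-gated reconstruction loss are all simplifying assumptions made for illustration; a real implementation would use GNN encoders and train the scorers jointly on the task and reconstruction losses.

```python
import numpy as np

def split_views(X, num_views, rng):
    """Randomly partition the feature columns of X into disjoint views
    (the random-feature-splitting construction; modality partitioning
    would use predefined column groups instead)."""
    perm = rng.permutation(X.shape[1])
    return [X[:, cols] for cols in np.array_split(perm, num_views)]

def node_scores(views, weights):
    """Average per-view linear scores and squash to (0, 1) with a sigmoid.
    Stands in for learned per-view scoring networks."""
    raw = np.mean([v @ w for v, w in zip(views, weights)], axis=0)
    return 1.0 / (1.0 + np.exp(-raw))

def reconstruction_loss(X, scores):
    """MSE between original features and score-gated features, so that
    down-weighting an informative node is penalized (illustrative stand-in
    for the paper's reconstruction objective)."""
    return np.mean((X - scores[:, None] * X) ** 2)

def prune(X, scores, keep_ratio=0.5):
    """Keep the top-scoring fraction of nodes (the pooling step)."""
    k = max(1, int(round(keep_ratio * X.shape[0])))
    kept = np.argsort(scores)[-k:]
    return X[kept], kept
```

In a full model the scorer weights would be optimized against `task_loss + reconstruction_loss`; here they are left as plain parameters so the sketch stays self-contained and runnable.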