🤖 AI Summary
To address performance degradation in out-of-distribution (OOD) generalization on graph data—caused by structural complexity and diverse distribution shifts—this paper proposes PISA, a framework that jointly models multiple causally consistent subgraph patterns rather than relying on a single invariant subgraph. Building on the causal modeling and information-theoretic objective of CIGA and the diversity-aware subgraph sampling of SuGAr, PISA introduces a learnable, dynamic MLP-based mechanism that prioritizes and fuses the resulting subgraph representations nonlinearly. Extensive experiments across 15 standard graph OOD benchmarks—including DrugOOD—show that PISA improves classification accuracy by up to 5% over state-of-the-art baselines, enhancing model adaptability and generalization under heterogeneous distribution shifts.
📝 Abstract
Recent work has extended the invariance principle for out-of-distribution (OOD) generalization from Euclidean to graph data, where challenges arise from complex structures and diverse distribution shifts in node attributes and topology. To address these, Chen et al. (2022b) proposed CIGA, which uses causal modeling and an information-theoretic objective to extract a single invariant subgraph capturing causal features. However, this single-subgraph focus can miss multiple causal patterns. Liu et al. (2025) addressed this with SuGAr, which learns and aggregates diverse invariant subgraphs via a sampler and a diversity regularizer, improving robustness but still relying on simple uniform or greedy aggregation. To overcome this limitation, the proposed PISA framework introduces a dynamic MLP-based aggregation that prioritizes and combines subgraph representations more effectively. Experiments on 15 datasets, including DrugOOD (Ji et al., 2023), show that PISA achieves up to 5% higher classification accuracy than prior methods.
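The core idea—replacing uniform or greedy aggregation with a learned, prioritized fusion of subgraph embeddings—can be illustrated with a minimal sketch. This is not PISA's actual implementation; the scoring network, dimensions, and weight initialization below are illustrative assumptions, and in practice the MLP would be trained end-to-end with the subgraph sampler and classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical setup: K candidate invariant-subgraph embeddings of dimension d,
# e.g. produced by a GNN encoder over sampled subgraphs (not shown here).
K, d, h = 4, 16, 8
subgraph_embs = rng.normal(size=(K, d))

# Tiny two-layer scoring MLP; in the real method these weights are learned.
W1 = rng.normal(size=(d, h)) * 0.1
W2 = rng.normal(size=(h, 1)) * 0.1

def prioritized_aggregate(embs):
    """Score each subgraph embedding, then fuse by softmax-weighted sum."""
    scores = np.tanh(embs @ W1) @ W2      # (K, 1) priority logits
    weights = softmax(scores.ravel())     # (K,) weights summing to 1
    fused = weights @ embs                # (d,) aggregated representation
    return weights, fused

weights, fused = prioritized_aggregate(subgraph_embs)
```

The softmax weighting lets the model emphasize subgraphs whose representations look more causally informative while still blending all candidates, in contrast to uniform averaging (equal weights) or greedy selection (a single one-hot weight).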