🤖 AI Summary
To address insufficient cross-layer dependency modeling and poor scalability in multilayer network link prediction, this paper reformulates the task as a multi-view edge classification problem and proposes the Cross-Layer Self-Attention (CLSA) framework. CLSA constructs edge-view sequences per layer and employs self-attention to dynamically integrate cross-layer structural evidence. It introduces two compatible variants—Trans-SLE (leveraging static embeddings) and Trans-GAT (integrating GNN encoders)—and adopts leakage-free evaluation protocols (cross-layer and inductive subgraph generalization) with a Union-Set candidate pool to ensure fairness and computational efficiency. Extensive experiments on six public multilayer network datasets demonstrate that CLSA consistently outperforms state-of-the-art baselines—including MELL, HOPLP-MUL, and RMNE—with average macro-F₁ gains of 3.2–9.7 percentage points. The results validate CLSA’s effectiveness, generalizability across diverse network topologies, and scalability to large-scale multilayer graphs.
📝 Abstract
Multiplex graphs capture diverse relations among shared nodes, yet most link predictors either collapse the layers into a single graph or treat them independently, losing crucial inter-layer dependencies and scaling poorly. To overcome this, we frame multiplex link prediction as multi-view edge classification. For each node pair, we construct a sequence of per-layer edge views and apply cross-layer self-attention to fuse evidence for the target layer. We present two models as instances of this framework: Trans-SLE, a lightweight transformer over static embeddings, and Trans-GAT, which combines layer-specific GAT encoders with transformer fusion. To ensure scalability and fairness, we introduce a Union-Set candidate pool and two leakage-free protocols: cross-layer generalization and inductive subgraph generalization. Experiments on six public multiplex datasets show consistent macro-F₁ gains over strong baselines (MELL, HOPLP-MUL, RMNE). Our approach is simple, scalable, and compatible with both precomputed embeddings and GNN encoders.
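To make the abstract's core idea concrete, here is a minimal sketch of fusing per-layer edge views with self-attention and reading out a score for the target layer. All names (`edge_view`, `score_target_layer`), the element-wise-product edge view, the identity Q/K/V projections, and the toy sum-then-sigmoid readout are illustrative assumptions, not the paper's actual architecture; Trans-SLE and Trans-GAT use learned projections, multi-head attention, and proper encoders.

```python
# Hypothetical pure-Python sketch of cross-layer self-attention over
# per-layer edge views. Everything here is a simplified stand-in for
# the learned components in Trans-SLE / Trans-GAT.
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def edge_view(z_u, z_v):
    """One layer's edge view for node pair (u, v): element-wise product
    of that layer's node embeddings (an assumed, simple choice)."""
    return [a * b for a, b in zip(z_u, z_v)]

def self_attention(views):
    """Single-head scaled dot-product self-attention with identity
    Q/K/V projections: each layer's view attends to all layers."""
    d = len(views[0])
    out = []
    for q in views:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in views]
        w = softmax(scores)
        out.append([sum(wj * v[i] for wj, v in zip(w, views))
                    for i in range(d)])
    return out

def score_target_layer(views, target):
    """Fuse cross-layer evidence, then score the target layer's slot."""
    fused = self_attention(views)
    logit = sum(fused[target])              # toy linear readout
    return 1.0 / (1.0 + math.exp(-logit))   # link probability

# Toy usage: 3 layers, embedding dimension 4, predicting in layer 0.
z_u = [[0.5, 0.1, -0.2, 0.3], [0.4, 0.0, 0.1, 0.2], [-0.1, 0.3, 0.2, 0.0]]
z_v = [[0.2, 0.4, 0.1, -0.3], [0.3, 0.1, 0.0, 0.5], [0.2, -0.2, 0.4, 0.1]]
views = [edge_view(zu, zv) for zu, zv in zip(z_u, z_v)]
p = score_target_layer(views, target=0)
```

The sketch shows why the formulation scales: the attention runs over a sequence whose length is the number of layers, not the number of nodes, so fusing cross-layer evidence for one candidate pair is cheap regardless of graph size.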