Mind the Links: Cross-Layer Attention for Link Prediction in Multiplex Networks

📅 2025-09-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address insufficient cross-layer dependency modeling and poor scalability in multilayer network link prediction, this paper reformulates the task as a multi-view edge classification problem and proposes the Cross-Layer Self-Attention (CLSA) framework. CLSA constructs edge-view sequences per layer and employs self-attention to dynamically integrate cross-layer structural evidence. It introduces two compatible variants—Trans-SLE (leveraging static embeddings) and Trans-GAT (integrating GNN encoders)—and adopts a leakage-free evaluation protocol with a Union-Set candidate pool to ensure fairness and computational efficiency. Extensive experiments on six public multilayer network datasets demonstrate that CLSA consistently outperforms state-of-the-art baselines—including MELL, HOPLP-MUL, and RMNE—with average macro-F₁ gains of 3.2–9.7 percentage points. The results validate CLSA’s effectiveness, generalizability across diverse network topologies, and scalability to large-scale multilayer graphs.

Technology Category

Application Category

📝 Abstract
Multiplex graphs capture diverse relations among shared nodes. Most predictors either collapse layers or treat them independently. This loses crucial inter-layer dependencies and struggles with scalability. To overcome this, we frame multiplex link prediction as multi-view edge classification. For each node pair, we construct a sequence of per-layer edge views and apply cross-layer self-attention to fuse evidence for the target layer. We present two models as instances of this framework: Trans-SLE, a lightweight transformer over static embeddings, and Trans-GAT, which combines layer-specific GAT encoders with transformer fusion. To ensure scalability and fairness, we introduce a Union--Set candidate pool and two leakage-free protocols: cross-layer and inductive subgraph generalization. Experiments on six public multiplex datasets show consistent macro-F_1 gains over strong baselines (MELL, HOPLP-MUL, RMNE). Our approach is simple, scalable, and compatible with both precomputed embeddings and GNN encoders.
Problem

Research questions and friction points this paper is trying to address.

Predicting links in multiplex networks using cross-layer attention
Addressing inter-layer dependency loss in existing prediction methods
Ensuring scalability and fairness with leakage-free generalization protocols
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses cross-layer self-attention to fuse multiplex edge evidence
Combines GAT encoders with transformer fusion for layers
Introduces leakage-free protocols for scalable generalization
🔎 Similar Papers
No similar papers found.