Type-Less yet Type-Aware Inductive Link Prediction with Pretrained Language Models

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Inductive link prediction in knowledge graphs is complicated by the frequent emergence of novel entities and by the scarcity or outright absence of explicit, fine-grained type annotations, which are often coarse, sparse, or noisy. Method: The authors propose TyleR, the first framework to leverage pretrained language models (PLMs) to extract implicit, fine-grained type semantics directly from local subgraph structures, without requiring human-annotated type labels. TyleR jointly encodes subgraph structural patterns and entity contexts via PLM-based semantic enrichment of sampled subgraphs, producing type-aware node representations that are further refined with type-aware contrastive learning to improve generalization. Contribution/Results: On standard inductive benchmarks, TyleR outperforms state-of-the-art methods, especially under sparse type supervision and low graph connectivity. Ablation studies confirm that implicit type modeling substantially improves inductive reasoning, demonstrating its effectiveness and robustness in realistic, label-scarce settings.
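The type-aware contrastive learning step described above can be sketched as an InfoNCE-style objective in which node embeddings that share the same (implicit) type act as positives. The sketch below is illustrative only: the function name, the use of discrete `type_ids` as a stand-in for TyleR's implicit type signals, and the temperature value are assumptions, not the paper's actual formulation.

```python
import numpy as np

def type_aware_contrastive_loss(embs, type_ids, temperature=0.1):
    """Illustrative InfoNCE-style loss: nodes with the same type id are
    positives, all other nodes are negatives. `type_ids` is a hypothetical
    stand-in for the implicit type signals TyleR derives from a PLM."""
    # L2-normalize so dot products become cosine similarities
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    sim = embs @ embs.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    # row-wise log-softmax over all candidate pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    n = len(embs)
    losses = []
    for i in range(n):
        positives = [j for j in range(n) if j != i and type_ids[j] == type_ids[i]]
        if positives:
            losses.append(-log_prob[i, positives].mean())
    return float(np.mean(losses))
```

Under this sketch, the loss is small when same-type nodes cluster in embedding space and large when they are scattered, which is the behavior a type-aware regularizer would be expected to encourage.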

📝 Abstract
Inductive link prediction is emerging as a key paradigm for real-world knowledge graphs (KGs), where new entities frequently appear and models must generalize to them without retraining. Predicting links in a KG faces the challenge of reasoning about previously unseen entities by leveraging generalizable node features such as subgraph structure, type annotations, and ontological constraints. However, explicit type information is often lacking or incomplete. Even when available, type information in most KGs is coarse-grained, sparse, and prone to errors from human annotation. In this work, we explore the potential of pre-trained language models (PLMs) to enrich node representations with implicit type signals. We introduce TyleR, a Type-less yet type-awaRe approach for subgraph-based inductive link prediction that leverages PLMs for semantic enrichment. Experiments on standard benchmarks demonstrate that TyleR outperforms state-of-the-art baselines in scenarios with scarce type annotations and sparse graph connectivity. To ensure reproducibility, we share our code at https://github.com/sisinflab/tyler.
Problem

Research questions and friction points this paper is trying to address.

Predicting links for unseen entities without retraining in knowledge graphs
Addressing incomplete or coarse-grained type annotations in knowledge graphs
Enhancing node representations using pretrained language models for semantic enrichment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses pretrained language models for semantic enrichment
Generates implicit type signals without explicit annotations
Improves link prediction in sparse knowledge graphs
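The semantic-enrichment idea above relies on feeding an entity's local subgraph to a PLM in textual form. A minimal sketch of such verbalization is shown below; the `[ENT]`/`[SEP]` tokens, the `inverse` marker for incoming edges, and the function name are hypothetical choices for illustration and need not match TyleR's actual input encoding.

```python
def verbalize_subgraph(entity, triples):
    """Turn the 1-hop neighborhood of `entity` into a text sequence a PLM
    encoder could consume. Token format here is an assumed convention."""
    parts = [f"[ENT] {entity}"]
    for head, relation, tail in triples:
        if head == entity:
            parts.append(f"{relation} {tail}")        # outgoing edge
        elif tail == entity:
            parts.append(f"inverse {relation} {head}")  # incoming edge
    return " [SEP] ".join(parts)
```

For example, with triples ("berlin", "capital_of", "germany") and ("germany", "part_of", "eu"), verbalizing "germany" yields a single sequence listing both its incoming and outgoing edges, which a PLM can then encode into a context-aware node representation.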