SEMMA: A Semantic Aware Knowledge Graph Foundation Model

📅 2025-05-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing knowledge graph foundation models (KGFMs) over-rely on graph topology while neglecting textual semantics of entities and relations, severely limiting generalization in zero-shot settings where relational vocabularies are entirely unseen. Method: We propose the first dual-channel framework integrating textual semantics and graph structure: leveraging large language models (LLMs) to generate semantic relation graphs, constructing a structure–semantics dual-graph representation, and designing a multi-graph fusion encoder for explicit joint modeling. Contribution/Results: We theoretically and empirically establish— for the first time—that textual semantics is decisive in extreme inductive tasks. Evaluated across 54 heterogeneous knowledge graphs, our method significantly outperforms purely structural baselines (e.g., ULTRA). Under the full-relation-unseen setting, it achieves double the performance of structural methods and sets a new state-of-the-art for zero-shot link prediction.

Technology Category

Application Category

📝 Abstract
Knowledge Graph Foundation Models (KGFMs) have shown promise in enabling zero-shot reasoning over unseen graphs by learning transferable patterns. However, most existing KGFMs rely solely on graph structure, overlooking the rich semantic signals encoded in textual attributes. We introduce SEMMA, a dual-module KGFM that systematically integrates transferable textual semantics alongside structure. SEMMA leverages Large Language Models (LLMs) to enrich relation identifiers, generating semantic embeddings that subsequently form a textual relation graph, which is fused with the structural component. Across 54 diverse KGs, SEMMA outperforms purely structural baselines like ULTRA in fully inductive link prediction. Crucially, we show that in more challenging generalization settings, where the test-time relation vocabulary is entirely unseen, structural methods collapse while SEMMA is 2x more effective. Our findings demonstrate that textual semantics are critical for generalization in settings where structure alone fails, highlighting the need for foundation models that unify structural and linguistic signals in knowledge reasoning.
Problem

Research questions and friction points this paper is trying to address.

Integrating textual semantics with graph structure in KGFMs
Improving generalization in unseen relation vocabularies
Enhancing zero-shot reasoning over diverse knowledge graphs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates textual semantics with graph structure
Uses LLMs to enrich relation identifiers
Fuses textual relation graph with structural component
🔎 Similar Papers
No similar papers found.