Scalable Feature Learning on Huge Knowledge Graphs for Downstream Machine Learning

📅 2025-07-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing knowledge graph embedding (KGE) methods suffer from two key limitations: (i) they rely on local contrastive learning objectives that are misaligned with downstream machine learning tasks, and (ii) they are constrained by GPU memory, hindering scalability to ultra-large-scale graphs. To address these challenges, we propose SEPAL, a novel KGE framework featuring global embedding alignment. SEPAL first optimizes embeddings over a small core subset of entities, then generalizes representations to the full graph via scalable message passing. This embedding propagation design eliminates reliance on high-end GPUs and enables training on commodity hardware for graphs containing up to 10 billion triples. Extensive experiments across seven large-scale knowledge graphs and 46 downstream tasks demonstrate that SEPAL consistently outperforms state-of-the-art methods, improving both efficiency and predictive performance.

📝 Abstract
Many machine learning tasks can benefit from external knowledge. Large knowledge graphs store such knowledge, and embedding methods can distill it into ready-to-use vector representations for downstream applications. Current models, however, have two limitations for this purpose: they are primarily optimized for link prediction via local contrastive learning, and they struggle to scale to the largest graphs due to GPU memory limits. To address these, we introduce SEPAL: a Scalable Embedding Propagation ALgorithm for large knowledge graphs, designed to produce high-quality embeddings for downstream tasks at scale. The key idea of SEPAL is to enforce global embedding alignment by optimizing embeddings only on a small core of entities and then propagating them to the rest of the graph via message passing. We evaluate SEPAL on 7 large-scale knowledge graphs and 46 downstream machine learning tasks. Our results show that SEPAL significantly outperforms previous methods on downstream tasks. In addition, SEPAL scales up its base embedding model, enabling it to fit huge knowledge graphs on commodity hardware.
Problem

Research questions and friction points this paper is trying to address.

Enhancing downstream ML with scalable knowledge graph embeddings
Overcoming GPU memory limits for large knowledge graphs
Improving embedding quality beyond link prediction optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scalable embedding propagation for large graphs
Global alignment via core entity optimization
Message passing for efficient knowledge transfer
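The core-then-propagate idea behind these contributions can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function name `propagate_embeddings`, the neighbor-averaging rule, and the fixed sweep count are all assumptions. The sketch only conveys the structure of the approach: embeddings are optimized on a small core, then the remaining entities obtain embeddings by aggregating from already-embedded neighbors, sweeping outward from the core.

```python
import numpy as np

def propagate_embeddings(edges, core_emb, n_entities, dim, n_sweeps=3):
    """Assign embeddings to all entities given trained core embeddings.

    edges: list of (head, tail) entity-id pairs from the knowledge graph.
    core_emb: dict {entity_id: vector} of embeddings trained on the core.
    Entities outside the core receive the average of their already-embedded
    neighbors; each sweep extends the embedded frontier by one hop.
    """
    emb = np.zeros((n_entities, dim))
    known = np.zeros(n_entities, dtype=bool)
    for e, v in core_emb.items():          # fix the core embeddings
        emb[e] = v
        known[e] = True

    for _ in range(n_sweeps):
        acc = np.zeros_like(emb)           # message accumulator
        cnt = np.zeros(n_entities)         # number of embedded neighbors
        for h, t in edges:                 # pass messages along both directions
            if known[h]:
                acc[t] += emb[h]
                cnt[t] += 1
            if known[t]:
                acc[h] += emb[t]
                cnt[h] += 1
        new = (~known) & (cnt > 0)         # entities reached this sweep
        emb[new] = acc[new] / cnt[new, None]
        known |= new                       # core embeddings stay unchanged
    return emb
```

Because only the core is ever optimized with gradients, the propagation step needs no backpropagation state, which is consistent with the paper's claim that the approach avoids GPU memory limits; the real method also handles relation types, which this sketch omits.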