RADAR: Reasoning as Discrimination with Aligned Representations for LLM-based Knowledge Graph Reasoning

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models often rely on superficial co-occurrence patterns in knowledge graph reasoning, struggling to capture genuine relational semantics, which limits their out-of-distribution generalization. This work proposes a discriminative relation reasoning framework that, for the first time, formulates knowledge graph reasoning as an entity selection task. By aligning representation spaces and enhancing entity separability through reinforcement learning, the model performs reasoning directly in this discriminative space, thereby avoiding hallucinatory generations. The approach substantially improves the robustness and transferability of relational semantics, achieving relative performance gains of 5–6% in link prediction and triple classification across four benchmark datasets, along with a 62.9% increase in task-relevant mutual information in intermediate representations.

📝 Abstract
Knowledge graph reasoning (KGR) infers missing facts, with recent advances increasingly harnessing the semantic priors and reasoning abilities of Large Language Models (LLMs). However, prevailing generative paradigms are prone to memorizing surface-level co-occurrences rather than learning genuine relational semantics, limiting out-of-distribution generalization. To address this, we propose RADAR, which reformulates KGR from generative pattern matching to discriminative relational reasoning. We recast KGR as discriminative entity selection, where reinforcement learning enforces relative entity separability beyond token-likelihood imitation. Leveraging this separability, inference operates directly in representation space, ensuring consistency with the discriminative optimization and bypassing generation-induced hallucinations. Across four benchmarks, RADAR achieves 5-6% relative gains on link prediction and triple classification over strong LLM baselines, while increasing task-relevant mutual information in intermediate representations by 62.9%, indicating more robust and transferable relational reasoning.
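The core idea of discriminative entity selection, scoring candidate tail entities directly in a shared representation space instead of generating entity names token by token, can be illustrated with a minimal sketch. The function name, cosine-similarity scoring rule, and embedding shapes below are illustrative assumptions for exposition, not RADAR's actual implementation.

```python
import numpy as np

def select_entity(query_vec: np.ndarray, candidate_vecs: np.ndarray) -> tuple[int, np.ndarray]:
    """Pick the candidate entity closest to the (head, relation) query representation.

    query_vec:      d-dimensional query embedding (hypothetical encoder output).
    candidate_vecs: (n, d) matrix of candidate entity embeddings in the same space.
    Returns the index of the best candidate and the full score vector.
    """
    # Normalize so the dot product becomes cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    c = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    scores = c @ q  # one similarity score per candidate entity
    # Discriminative inference: argmax over candidates, no free-form generation,
    # so the prediction is always a valid entity (no hallucinated names).
    return int(np.argmax(scores)), scores
```

In this picture, the reinforcement-learning objective described in the abstract would train the encoder so that the correct entity's score is separated from the scores of incorrect candidates, and inference then reuses exactly this scoring rule.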
Problem

Research questions and friction points this paper is trying to address.

Knowledge Graph Reasoning
Large Language Models
Out-of-Distribution Generalization
Relational Semantics
Generative Paradigms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Discriminative Reasoning
Representation Alignment
Reinforcement Learning
Knowledge Graph Reasoning
Large Language Models