🤖 AI Summary
To address the challenge of jointly modeling textual semantics and graph structure in knowledge graph (KG) completion, this paper proposes a prompt-based disentangled representation framework. Methodologically, it is the first to integrate prompt tuning, disentangled variational autoencoding, and contrastive-learning regularization: a pretrained language model (PLM) encodes the contextual semantics of entities and relations; a disentangled encoder explicitly separates relational semantics from entity-role semantics; and contrastive learning regularizes the embedding space to preserve structural consistency. On standard benchmarks including FB15k-237, the approach achieves a 3.2% improvement in mean reciprocal rank (MRR) on link prediction, substantially outperforming TransE and GNN-based baselines. By decoupling semantic and structural representations, it also improves generalization and interpretability. Overall, the work offers a paradigm for KG completion that balances deep semantic understanding with explicit structural modeling.
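To make the three-part design concrete, here is a minimal PyTorch sketch of the pipeline as described: a disentangled variational encoder that splits a PLM embedding into relation-semantic and entity-role factors, plus a contrastive regularizer. The summary does not specify architectural details, so everything below is an assumption for illustration: the class and function names, the dimensions, the use of InfoNCE as the contrastive term, the loss weights, and the random tensors standing in for PLM `[MASK]`-token embeddings of prompted triples.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledEncoder(nn.Module):
    """Hypothetical encoder: splits a PLM embedding into two variational
    latent factors, relation semantics (z_rel) and entity-role semantics
    (z_role), each with its own mean/log-variance head."""
    def __init__(self, plm_dim: int = 768, latent_dim: int = 128):
        super().__init__()
        self.rel_mu = nn.Linear(plm_dim, latent_dim)
        self.rel_logvar = nn.Linear(plm_dim, latent_dim)
        self.role_mu = nn.Linear(plm_dim, latent_dim)
        self.role_logvar = nn.Linear(plm_dim, latent_dim)

    @staticmethod
    def reparameterize(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
        # Standard VAE reparameterization trick.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, h: torch.Tensor):
        rel_mu, rel_lv = self.rel_mu(h), self.rel_logvar(h)
        role_mu, role_lv = self.role_mu(h), self.role_logvar(h)
        z_rel = self.reparameterize(rel_mu, rel_lv)
        z_role = self.reparameterize(role_mu, role_lv)
        # KL of each factor against a unit-Gaussian prior, keeping the
        # two latent factors well-shaped and independently regularized.
        kl = (-0.5 * (1 + rel_lv - rel_mu.pow(2) - rel_lv.exp()).sum(-1).mean()
              - 0.5 * (1 + role_lv - role_mu.pow(2) - role_lv.exp()).sum(-1).mean())
        return z_rel, z_role, kl

def info_nce(anchor: torch.Tensor, positive: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """In-batch InfoNCE: row i of `positive` is the positive for row i of
    `anchor`; all other rows in the batch act as negatives."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature
    return F.cross_entropy(logits, torch.arange(a.size(0)))

# --- Toy usage with random stand-ins for PLM outputs ---
num_entities = 14_541  # FB15k-237 entity count
encoder = DisentangledEncoder()
scorer = nn.Linear(2 * 128, num_entities)   # scores every candidate tail entity

h = torch.randn(4, 768)                     # "[head] [relation] [MASK]" prompt embeddings
h_aug = h + 0.01 * torch.randn_like(h)      # perturbed view for the contrastive pair

z_rel, z_role, kl = encoder(h)
z_rel_aug, _, _ = encoder(h_aug)
logits = scorer(torch.cat([z_rel, z_role], dim=-1))

tails = torch.randint(0, num_entities, (4,))
loss = F.cross_entropy(logits, tails) + 0.1 * info_nce(z_rel, z_rel_aug) + 1e-3 * kl
loss.backward()
```

The separate mean/variance heads are what make the disentanglement explicit: gradients from the contrastive term touch only the relation factor, while the link-prediction loss trains both, mirroring the summary's split between relational and entity-role semantics.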