Knowledge Restoration-driven Prompt Optimization: Unlocking LLM Potential for Open-Domain Relational Triplet Extraction

📅 2026-01-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of current large language models in open-domain relational triplet extraction: they rely on static prompts, lack mechanisms to reflect on error patterns, and struggle in particular with semantically ambiguous scenarios. To overcome this, the authors propose a Knowledge Reconstruction-driven Prompt Optimization (KRPO) framework that generates intrinsic feedback through self-assessment based on knowledge reconstruction and iteratively refines prompts via a text-gradient-driven optimizer, thereby internalizing historical experience. Additionally, a relation canonicalization memory module is introduced to mitigate relation redundancy and enhance semantic discriminability. Experimental results show that KRPO significantly outperforms strong baselines across three open-domain datasets, achieving notable improvements in F1 score.

📝 Abstract
Open-domain Relational Triplet Extraction (ORTE) is the foundation for mining structured knowledge without predefined schemas. Despite the impressive in-context learning capabilities of Large Language Models (LLMs), existing methods are hindered by their reliance on static, heuristic-driven prompting strategies. Lacking the reflection mechanisms needed to internalize error signals, these methods are fragile under semantic ambiguity and often entrench erroneous extraction patterns. To address this bottleneck, we propose a Knowledge Reconstruction-driven Prompt Optimization (KRPO) framework to help LLMs continuously improve their extraction capabilities on complex ORTE task flows. Specifically, we design a self-evaluation mechanism based on knowledge restoration, which provides intrinsic feedback signals by projecting structured triplets into semantic consistency scores. We then propose a textual-gradient-based prompt optimizer that internalizes historical experience to iteratively refine prompts, better guiding LLMs on subsequent extraction tasks. Furthermore, to alleviate relation redundancy, we design a relation canonicalization memory that collects representative relations and provides semantically distinct schemas for the triplets. Extensive experiments across three datasets show that KRPO significantly outperforms strong baselines in extraction F1 score.
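The loop the abstract describes — restore text from extracted triplets, score semantic consistency, and fold a natural-language "gradient" back into the prompt — can be sketched roughly as below. This is a hypothetical illustration, not the paper's implementation: there are no LLM calls, and the token-overlap score (`consistency_score`) and the hand-written feedback string are simplified stand-ins for the paper's restoration-based scoring and textual-gradient optimizer.

```python
# Hypothetical sketch of a KRPO-style refinement step. All functions are
# stand-ins: no LLM is invoked, and the consistency metric is plain token
# overlap rather than the paper's semantic consistency score.

def restore_text(triplets):
    """Project structured (head, relation, tail) triplets back into flat text."""
    return " ".join(f"{h} {r} {t}" for h, r, t in triplets)

def consistency_score(source, restored):
    """Token-overlap stand-in for the restoration-based consistency score."""
    src = set(source.lower().split())
    res = set(restored.lower().split())
    return len(src & res) / max(len(src), 1)

def textual_gradient(prompt, score, threshold=0.8):
    """Turn a low score into natural-language feedback appended to the prompt."""
    if score >= threshold:
        return prompt  # extraction covered the source well; no update
    feedback = ("Previous extractions dropped source content; "
                "prefer triplets that cover every entity mention.")
    return prompt + "\n" + feedback

# One optimization step over a toy example.
source = "Marie Curie discovered polonium"
triplets = [("Marie Curie", "discovered", "polonium")]
prompt = "Extract (head, relation, tail) triplets from the text."

score = consistency_score(source, restore_text(triplets))
prompt = textual_gradient(prompt, score)
```

In the paper's framing, the feedback string would itself be generated by an LLM from the error pattern (the "textual gradient"), and the refined prompt would be reused on subsequent extraction tasks.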
Problem

Research questions and friction points this paper is trying to address.

Open-Domain Relational Triplet Extraction
Large Language Models
Prompt Optimization
Semantic Ambiguity
Knowledge Restoration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge Restoration
Prompt Optimization
Relational Triplet Extraction
Textual Gradient
Relation Canonicalization
Xiaonan Jing
The Key Laboratory of Knowledge Engineering with Big Data (the Ministry of Education of China), Hefei University of Technology, China; School of Computer Science and Information Engineering, Hefei University of Technology, China
Gongqing Wu
Hefei University of Technology
Web Intelligence; Data Mining
Xingrui Zhuo
The Key Laboratory of Knowledge Engineering with Big Data (the Ministry of Education of China), Hefei University of Technology, China; School of Computer Science and Information Engineering, Hefei University of Technology, China
Lang Sun
Anhui Zhongke Guojin Intelligent Technology Co., Ltd., China
Jiapu Wang
Nanjing University of Science and Technology