🤖 AI Summary
In constrained settings such as LM-KBC, triple completion faces three key challenges: low-quality generation, difficulty in filtering spurious triples, and non-robust parsing of LLM outputs.
Method: This paper proposes a lightweight, RAG-free, and fine-tuning-free framework built on large language models (LLMs). It enriches the input information to improve generation quality, exploits the LLM’s intrinsic discriminative ability to dynamically filter low-quality triples, and introduces a configurable parsing strategy that trades off flexibility against consistency according to task requirements.
Contribution/Results: Experiments demonstrate significant improvements in completion accuracy and robustness. The study empirically delineates the applicability boundaries of different parsing strategies, offering a new paradigm for automated knowledge base construction (AKBC) that is efficient, interpretable, and deployment-friendly.
📝 Abstract
RAG and fine-tuning are prevalent strategies for improving the quality of LLM outputs. However, in constrained settings such as the 2025 LM-KBC challenge, these techniques are restricted. In this work, we investigate three facets of the triple completion task: generation, quality assurance, and LLM response parsing. We find that in this constrained setting, additional information improves generation quality, LLMs can effectively filter poor-quality triples, and the tradeoff between flexibility and consistency in LLM response parsing is setting-dependent.
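The flexibility/consistency tradeoff in response parsing can be illustrated with a minimal sketch. The function below is hypothetical (not the paper's actual implementation): a strict mode accepts only well-formed JSON lists, while a lenient mode falls back to regex extraction from free-form text.

```python
import json
import re

def parse_objects(response: str, strict: bool = True) -> list[str]:
    """Extract object entities from an LLM response.

    strict=True  -> accept only a well-formed JSON list (consistency).
    strict=False -> fall back to pulling quoted strings from
                    free-form text (flexibility).
    """
    try:
        data = json.loads(response)
        if isinstance(data, list):
            return [str(x).strip() for x in data]
    except json.JSONDecodeError:
        pass
    if strict:
        # Reject malformed output entirely rather than guess.
        return []
    # Lenient fallback: any quoted span counts as a candidate object.
    return re.findall(r'"([^"]+)"', response)

# Well-formed output parses identically in both modes.
parse_objects('["Paris", "Lyon"]')                     # ["Paris", "Lyon"]
# Free-form output is recovered only in lenient mode.
parse_objects('The answers are "Paris" and "Lyon".',
              strict=False)                            # ["Paris", "Lyon"]
parse_objects('The answers are "Paris" and "Lyon".')   # []
```

The strict mode yields cleaner, more consistent triples at the cost of discarding recoverable answers; the lenient mode recovers more but risks admitting spurious matches, which is why the best choice is setting-dependent.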