AI Summary
Low accuracy and high ambiguity in entity linking for short-text question answering remain critical challenges. This paper proposes a large language model (LLM)-based entity linking agent framework that emulates human cognitive processes by decoupling entity recognition, candidate retrieval, and disambiguation into collaborative, modular agents. The framework supports active reasoning and tool invocation, including knowledge base queries and context expansion, to enable context-aware, end-to-end linking. Its core innovation lies in a dynamic, agent-driven reasoning mechanism that overcomes the rigidity and error propagation inherent in conventional pipeline approaches. Extensive experiments on multiple short-text QA benchmarks demonstrate substantial improvements in linking accuracy. The method consistently outperforms state-of-the-art baselines in both tool-augmented linking and end-to-end QA tasks, validating its robustness and effectiveness.
Abstract
Some Question Answering (QA) systems rely on knowledge bases (KBs) to provide accurate answers. Entity Linking (EL) plays a critical role in linking natural language mentions to KB entries. However, most existing EL methods are designed for long contexts and do not perform well on the short, ambiguous user questions typical of QA tasks. We propose an entity linking agent for QA, based on a Large Language Model (LLM), that simulates human cognitive workflows: the agent actively identifies entity mentions, retrieves candidate entities, and makes disambiguation decisions. To verify the effectiveness of our agent, we conduct two experiments: tool-based entity linking and QA task evaluation. The results confirm the robustness and effectiveness of our agent.
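The three-stage workflow described above (mention identification, candidate retrieval, disambiguation) can be sketched in code. This is a minimal illustrative mock-up, not the paper's implementation: the `EntityLinkingAgent` class, the toy KB, the word-overlap scorer, and the KB identifiers are all hypothetical stand-ins for what the paper realizes with LLM calls and real KB tools.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    kb_id: str        # hypothetical KB identifier
    name: str         # entity surface name
    description: str  # short KB description used for disambiguation

class EntityLinkingAgent:
    """Toy sketch of the decoupled EL stages; the paper uses LLM reasoning
    and tool invocation (KB queries, context expansion) at each step."""

    def __init__(self, kb):
        # kb: mapping from surface form -> list of Candidate entries
        self.kb = kb

    def detect_mentions(self, question):
        # Placeholder mention detector: substring match against known forms.
        return [m for m in self.kb if m.lower() in question.lower()]

    def retrieve_candidates(self, mention):
        # Placeholder candidate retrieval: direct KB lookup.
        return self.kb.get(mention, [])

    def disambiguate(self, question, candidates):
        # Placeholder scorer: word overlap between the question and each
        # candidate's description (the paper reasons over context instead).
        q_words = set(question.lower().split())
        return max(candidates,
                   key=lambda c: len(q_words & set(c.description.lower().split())))

    def link(self, question):
        # End-to-end pass: mentions -> candidates -> disambiguation decisions.
        links = {}
        for mention in self.detect_mentions(question):
            cands = self.retrieve_candidates(mention)
            if cands:
                links[mention] = self.disambiguate(question, cands).kb_id
        return links

# Toy KB with two senses of "Jaguar" (hypothetical IDs and descriptions).
kb = {
    "Jaguar": [
        Candidate("Q_CAR", "Jaguar", "british luxury car manufacturer"),
        Candidate("Q_CAT", "Jaguar", "large wild cat species found in the americas"),
    ]
}
agent = EntityLinkingAgent(kb)
result = agent.link("How fast can a jaguar cat run in the wild?")
# result -> {"Jaguar": "Q_CAT"} (the animal sense wins on description overlap)
```

The point of the sketch is the decoupling: each stage is a separate method that a stronger component (an LLM prompt, a retriever, a KB query tool) could replace without touching the others.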