🤖 AI Summary
This work addresses zero-shot entity linking in cross-domain scenarios by proposing a modular coarse-to-fine reasoning framework built on large language models (LLMs). The approach enables plug-and-play adaptation across diverse domains, knowledge bases, and LLMs without any model fine-tuning. Evaluated on multiple standard entity linking benchmarks, the method substantially outperforms existing non-fine-tuned approaches and matches, and in some cases surpasses, state-of-the-art fine-tuned models, demonstrating strong generalization and practical utility.
📝 Abstract
Entity linking, the task of mapping ambiguous mentions in text to entities in a knowledge base, is a foundational step in tasks such as knowledge graph construction, question answering, and information extraction. Our method, LELA, is a modular coarse-to-fine approach that leverages the capabilities of large language models (LLMs) and works with different target domains, knowledge bases, and LLMs without any fine-tuning. Our experiments across various entity linking settings show that LELA is highly competitive with fine-tuned approaches and substantially outperforms non-fine-tuned ones.
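To make the coarse-to-fine idea concrete, the sketch below shows a toy two-stage entity-linking pipeline: a cheap coarse stage narrows the knowledge base to a few candidates, and a fine stage picks the best one using the mention's context. Everything here is illustrative, not the paper's actual implementation: the tiny `KB`, the string-similarity retriever, and the word-overlap scorer (which stands in for the LLM reasoning step in LELA) are all assumptions.

```python
# Hedged sketch of a coarse-to-fine entity-linking pipeline in the spirit of
# LELA. The KB, retriever, and scorer are illustrative stand-ins, not the
# paper's method; in LELA the fine stage is performed by an LLM.
from difflib import SequenceMatcher

# Toy knowledge base: entity ID -> short description (assumption).
KB = {
    "Q1": "Paris (capital of France)",
    "Q2": "Paris Hilton (American media personality)",
    "Q3": "Paris, Texas (city in the United States)",
}

def coarse_candidates(mention, kb, k=2):
    """Coarse stage: cheap string similarity narrows the KB to k candidates."""
    scored = sorted(
        kb.items(),
        key=lambda e: SequenceMatcher(None, mention.lower(), e[1].lower()).ratio(),
        reverse=True,
    )
    return scored[:k]

def fine_rerank(mention, context, candidates):
    """Fine stage: a stand-in scorer counts context words that appear in the
    entity description (an LLM would reason over the context instead)."""
    ctx = {w.strip("(),.") for w in context.lower().split()}
    def score(desc):
        return sum(w.strip("(),.").lower() in ctx for w in desc.split())
    return max(candidates, key=lambda e: score(e[1]))

mention = "Paris"
context = "The Eiffel Tower stands in Paris, the capital of France."
cands = coarse_candidates(mention, KB)          # e.g. narrows to 2 candidates
best_id, best_desc = fine_rerank(mention, context, cands)
print(best_id)  # the entity the toy pipeline links "Paris" to
```

The split mirrors the modularity claimed in the abstract: either stage can be swapped independently, e.g. replacing the retriever with a dense encoder or the scorer with a different LLM, without retraining anything.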