🤖 AI Summary
To address the dual bottlenecks in long-tailed entity disambiguation—namely, data scarcity for specialized entity linking (EL) models and large language models' (LLMs) inability to produce structured knowledge base (KB) outputs—this paper proposes LLMAEL. LLMAEL leverages LLMs (e.g., LLaMA, GPT) as zero-shot, plug-and-play context enhancers that generate mention-centered descriptions without fine-tuning; these descriptions are then injected into conventional EL models (e.g., BLINK) for end-to-end inference. Crucially, this design decouples LLM-based knowledge injection from EL decision-making, balancing generalization and accuracy. Evaluated on six standard benchmarks, the zero-shot variant of LLMAEL outperforms baseline EL models in most cases, while its fine-tuned version establishes new state-of-the-art (SOTA) results across all datasets, with substantial average accuracy gains.
📝 Abstract
Entity Linking (EL) models are well trained to map mentions to their corresponding entities given a context. However, EL models struggle to disambiguate long-tail entities due to their limited training data. Meanwhile, large language models (LLMs) are more robust at interpreting uncommon mentions; yet, lacking specialized training, LLMs struggle to generate correct entity IDs, and training an LLM to perform EL is cost-intensive. Building upon these insights, we introduce LLM-Augmented Entity Linking (LLMAEL), a plug-and-play approach to enhance entity linking through LLM data augmentation. We leverage LLMs as knowledgeable context augmenters, generating mention-centered descriptions as additional input, while preserving traditional EL models for task-specific processing. Experiments on 6 standard datasets show that the vanilla LLMAEL outperforms baseline EL models in most cases, while the fine-tuned LLMAEL sets new state-of-the-art results across all 6 benchmarks.
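The augmentation pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the LLM call and the EL model are stubs (in practice they would be, e.g., GPT and BLINK), and all function names and the `[SEP]` joining convention are assumptions for illustration.

```python
def llm_describe(mention: str, context: str) -> str:
    """Stub for a zero-shot LLM call that returns a short,
    mention-centered description (no fine-tuning assumed)."""
    # A real system would prompt an LLM such as LLaMA or GPT here.
    return f"{mention} is an entity referenced in: {context}"

def augment_context(mention: str, context: str) -> str:
    """Concatenate the original context with the LLM-generated
    description, forming the enriched input for the EL model."""
    description = llm_describe(mention, context)
    return f"{context} [SEP] {description}"

def link_entity(mention: str, augmented_context: str) -> str:
    """Stub for a conventional EL model (e.g., BLINK) that maps a
    mention plus its augmented context to a KB entity ID."""
    # A real model would rank candidate entities; here we just echo.
    return f"KB_ID({mention})"

mention = "Jaguar"
context = "The Jaguar accelerated down the motorway."
entity_id = link_entity(mention, augment_context(mention, context))
```

The key design point this mirrors is the decoupling: the LLM only enriches the input, while the unchanged EL model makes the final linking decision.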