Improving LLMs' Learning for Coreference Resolution

📅 2025-09-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the pervasive hallucination and suboptimal performance of large language models (LLMs) in coreference resolution, this paper proposes a novel learning framework integrating question-answering templates with structured document templates. Methodologically, it introduces (1) Reversed Training with Joint Inference, which explicitly models bidirectional dependencies within coreference chains, and (2) Iterative Document Generation, which dynamically refines the generated text during inference to suppress hallucinations relative to the source document. Experiments on standard benchmarks, including OntoNotes, demonstrate substantial improvements in F1 score (+3.2–5.7 points), enhanced robustness, and superior handling of long-distance and nested coreference relations. This work establishes a new paradigm for LLM-based coreference resolution: interpretable, controllable, and low-hallucination.

📝 Abstract
Coreference Resolution (CR) is crucial for many NLP tasks, but existing LLMs struggle with hallucination and under-performance. In this paper, we investigate the limitations of existing LLM-based approaches to CR, specifically the Question-Answering (QA) Template and Document Template methods, and propose two novel techniques: Reversed Training with Joint Inference and Iterative Document Generation. Our experiments show that Reversed Training improves the QA Template method, while Iterative Document Generation eliminates hallucinations in the generated source text and boosts coreference resolution. Integrating these methods and techniques offers an effective and robust solution to LLM-based coreference resolution.
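The QA Template method mentioned in the abstract frames coreference as a question posed to the model about a specific mention. The paper's exact prompt wording is not given here, so the template below is a minimal, hypothetical sketch of what such a prompt might look like:

```python
def qa_template_prompt(document: str, mention: str) -> str:
    """Build a QA-style coreference prompt: the model is shown the
    document and asked which entity a target mention refers to.
    The wording is illustrative, not the paper's actual template."""
    return (
        "Read the document and answer the question.\n\n"
        f"Document: {document}\n\n"
        f'Question: In the document, who or what does "{mention}" refer to?\n'
        "Answer:"
    )

# Example: querying the antecedent of the pronoun "She".
doc = "Alice met Bob at the station. She handed him the keys."
prompt = qa_template_prompt(doc, "She")
```

One such prompt is issued per mention, and the model's free-text answer is then matched back to a span in the document to build coreference chains.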
Problem

Research questions and friction points this paper is trying to address.

Addressing LLM hallucination in coreference resolution
Improving Question-Answering Template method performance
Eliminating hallucinations in generated source text
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reversed Training with Joint Inference
Iterative Document Generation
Integrating both techniques into a robust solution