🤖 AI Summary
To address the challenge of precisely localizing hallucinations in knowledge-intensive question answering, this paper proposes a three-stage fine-grained detection framework: context retrieval → erroneous content identification → hallucination span backtracking and annotation, further enhanced by automatic prompt optimization. The method combines retrieval-augmented generation (RAG) with span-level localization, mapping identified false content back to the exact spans in the LLM output where it occurs. Evaluated on SemEval-2025 Task 3, Mu-SHROOM, a multilingual hallucination-detection benchmark, the system achieves the highest overall performance, ranking #1 in average position across all languages. All code and experimental results are publicly released.
📝 Abstract
Hallucinations pose a significant challenge for large language models when answering knowledge-intensive queries. As LLMs become more widely adopted, it is crucial not only to detect whether hallucinations occur but also to pinpoint exactly where in the LLM output they occur. SemEval-2025 Task 3, Mu-SHROOM: Multilingual Shared-task on Hallucinations and Related Observable Overgeneration Mistakes, is a recent effort in this direction. This paper describes the UCSC system submission to the Mu-SHROOM shared task. We introduce a framework that first retrieves relevant context, then identifies false content in the answer, and finally maps it back to spans in the LLM output. The process is further enhanced by automatically optimizing prompts. Our system achieves the highest overall performance, ranking #1 in average position across all languages. We release our code and experiment results.
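The span-backtracking step of the pipeline described above can be illustrated with a minimal sketch: given text fragments that the identification stage flagged as false, map each fragment back to character offsets in the LLM output, which is the span format Mu-SHROOM scores. The function name and the example data here are illustrative assumptions, not the paper's actual implementation.

```python
def locate_spans(output_text: str, false_fragments: list[str]) -> list[tuple[int, int]]:
    """Map flagged false fragments back to (start, end) character spans
    in the LLM output. Hypothetical helper; exact-match backtracking only."""
    spans = []
    for frag in false_fragments:
        start = output_text.find(frag)
        if start != -1:  # skip fragments the model paraphrased beyond recognition
            spans.append((start, start + len(frag)))
    return spans

# Example: suppose the identification stage flagged "for relativity" as
# unsupported by the retrieved context (the 1921 Nobel Prize was awarded
# for the photoelectric effect, not relativity).
answer = "Einstein won the Nobel Prize in 1921 for relativity."
print(locate_spans(answer, ["for relativity"]))  # [(37, 51)]
```

A real system would need fuzzier alignment (e.g. token-level matching) when the flagged content is a paraphrase rather than a verbatim substring of the output.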