UCSC at SemEval-2025 Task 3: Context, Models and Prompt Optimization for Automated Hallucination Detection in LLM Output

📅 2025-05-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To precisely localize hallucinations in knowledge-intensive question answering (KIQA), this paper proposes a three-stage fine-grained detection framework: context retrieval → erroneous content identification → hallucination span back-mapping and annotation, combined with an automatic prompt-optimization mechanism. The method integrates retrieval-augmented generation (RAG), sequence-labeling modeling, gradient-based prompt optimization, and multilingual alignment-aware feature representation, enabling token-level cross-lingual hallucination localization. The pipeline jointly optimizes detection accuracy and cross-task generalizability. Evaluated on SemEval-2025 Mu-SHROOM, a multilingual hallucination benchmark, the approach ranks first overall in average position across all languages. All code and experimental data are publicly released.

📝 Abstract
Hallucinations pose a significant challenge for large language models when answering knowledge-intensive queries. As LLMs become more widely adopted, it is crucial not only to detect whether hallucinations occur but also to pinpoint exactly where in the LLM output they occur. SemEval 2025 Task 3, Mu-SHROOM: Multilingual Shared-task on Hallucinations and Related Observable Overgeneration Mistakes, is a recent effort in this direction. This paper describes the UCSC system submission to the Mu-SHROOM shared task. We introduce a framework that first retrieves relevant context, next identifies false content in the answer, and finally maps it back to spans in the LLM output. The process is further enhanced by automatically optimizing prompts. Our system achieves the highest overall performance, ranking #1 in average position across all languages. We release our code and experiment results.
Problem

Research questions and friction points this paper is trying to address.

Detect hallucinations in LLM outputs for knowledge queries
Pinpoint exact locations of hallucinations in LLM responses
Optimize prompts to enhance hallucination detection accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieves relevant context for verification
Identifies false content in LLM answers
Automatically optimizes prompts to improve detection
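The three stages above can be sketched as a minimal pipeline. This is a hypothetical illustration, not the paper's code: the toy fact store stands in for RAG retrieval, and the capitalization heuristic stands in for the LLM-based false-content identification step; all function names are assumptions.

```python
def retrieve_context(question):
    # Stage 1: retrieve supporting evidence.
    # A toy in-memory fact store stands in for a real RAG retriever.
    facts = {"What is the capital of France?": "The capital of France is Paris."}
    return facts.get(question, "")

def identify_false_content(answer, context):
    # Stage 2: flag answer content unsupported by the retrieved context.
    # Toy heuristic (capitalized, non-sentence-initial words not found in the
    # context) standing in for the paper's LLM-based identification step.
    supported = set(context.strip(".").lower().split())
    flagged = []
    for i, tok in enumerate(answer.split()):
        word = tok.strip(".,")
        if i > 0 and word[:1].isupper() and word.lower() not in supported:
            flagged.append(word)
    return flagged

def map_to_spans(answer, false_words):
    # Stage 3: map each flagged word back to (start, end) character
    # offsets in the original LLM output.
    spans = []
    for word in false_words:
        start = answer.find(word)
        if start != -1:
            spans.append((start, start + len(word)))
    return spans

def detect_hallucination_spans(question, answer):
    context = retrieve_context(question)
    return map_to_spans(answer, identify_false_content(answer, context))

question = "What is the capital of France?"
answer = "The capital of France is Lyon."
print(detect_hallucination_spans(question, answer))  # [(25, 29)]
```

The span output follows the Mu-SHROOM convention of character-level offsets into the model's answer, which is what makes exact localization (rather than binary detection) possible.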
Sicong Huang
University of California, Santa Cruz
Jincheng He
University of Southern California
Shiyuan Huang
University of California, Santa Cruz
Karthik Raja Anandan
University of California, Santa Cruz
Arkajyoti Chakraborty
University of California, Santa Cruz
Ian Lane
Carnegie Mellon University