🤖 AI Summary
Hallucinations and over-generation in multilingual large language models (LLMs) remain poorly understood and are rarely evaluated across languages. Method: This work introduces Mu-SHROOM, a large-scale shared task for multilingual hallucination detection covering 14 languages, which formalizes the problem as span-level annotation rather than coarse binary classification, advancing detection toward fine-grained localization. A unified multilingual span-annotation framework reveals cross-lingual disparities in how often models hallucinate and substantial annotator disagreement over span boundaries. Contribution/Results: Building on these span-level annotations and cross-lingual benchmarking, the organizers ran a competition in which 43 teams submitted 2,618 systems, establishing a broad multilingual baseline for hallucination detection. Empirical analysis identifies model scale, language resource coverage, and post-processing strategies as key determinants of detection performance.
📝 Abstract
We present the Mu-SHROOM shared task, which focuses on detecting hallucinations and other overgeneration mistakes in the output of instruction-tuned large language models (LLMs). Mu-SHROOM addresses general-purpose LLMs in 14 languages and frames hallucination detection as a span-labeling task. We received 2,618 submissions from 43 participating teams employing diverse methodologies; this volume underscores the community's interest in hallucination detection. We present the results of the participating systems and conduct an empirical analysis to identify the key factors behind strong performance on this task. We also highlight relevant open challenges, notably the varying degree of hallucination across languages and the high annotator disagreement when labeling hallucination spans.
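To make the span-labeling framing concrete, here is a minimal sketch of what span-level hallucination annotation and scoring could look like. The data format (character-offset spans over the LLM output) and the character-level intersection-over-union score are illustrative assumptions for this sketch, not the official Mu-SHROOM task definition or evaluation metric.

```python
# Hypothetical span-level annotation sketch: each annotation marks a
# (start, end) character span of the model output as hallucinated.
# The IoU score below is an illustrative overlap measure, not the
# official shared-task metric.

def spans_to_charset(spans):
    """Expand (start, end) character spans into a set of character indices."""
    chars = set()
    for start, end in spans:
        chars.update(range(start, end))
    return chars

def span_iou(pred_spans, gold_spans):
    """Character-level intersection-over-union between predicted and gold spans."""
    pred = spans_to_charset(pred_spans)
    gold = spans_to_charset(gold_spans)
    if not pred and not gold:
        return 1.0  # both annotators mark nothing: perfect agreement
    return len(pred & gold) / len(pred | gold)

# Invented example output with two hallucinated facts:
output = "The Eiffel Tower was completed in 1901 in Lyon."
gold = [(34, 38), (42, 46)]  # "1901" and "Lyon" marked as hallucinated
pred = [(34, 38)]            # a system that catches only the wrong date

print(span_iou(pred, gold))  # 0.5: half of the gold characters recovered
```

A span-level score like this is stricter than binary classification: a system must localize *which* characters are unsupported, which is also where the annotator disagreement mentioned above shows up, since humans often draw span boundaries differently.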