🤖 AI Summary
Large language models (LLMs) frequently generate hallucinated content, yet their reliability across languages remains underexplored. Method: We introduce MlingConf, a multilingual reliability evaluation benchmark comprising language-agnostic (LA) and language-specific (LS) tasks. It features high-quality, manually annotated multilingual datasets and establishes, for the first time, a unified framework for multilingual confidence estimation. Contribution/Results: Our analysis reveals a pronounced language-dominance effect of English on LA tasks. For LS tasks, native-tone prompting (eliciting confidence estimates with instructions written in the question's own language) improves average accuracy by 12.3%. This simple, efficient strategy significantly enhances LLMs' culturally adaptive reliability, offering both a novel methodological approach and an empirical foundation for trustworthy multilingual AI.
📝 Abstract
The tendency of Large Language Models (LLMs) to generate hallucinations raises concerns about their reliability. Confidence estimates, which indicate how trustworthy a generation is, therefore become essential. However, LLM confidence estimation in languages other than English remains underexplored. This paper addresses this gap by introducing a comprehensive investigation of Multilingual Confidence estimation (MlingConf) on LLMs, covering both language-agnostic (LA) and language-specific (LS) tasks to explore the performance and language-dominance effects of multilingual confidence estimation across tasks. The benchmark comprises four meticulously checked and human-evaluated high-quality multilingual datasets for LA tasks and one for the LS task, tailored to the specific social, cultural, and geographical contexts of each language. Our experiments reveal that on LA tasks English exhibits notable linguistic dominance in confidence estimation over other languages, while on LS tasks, prompting LLMs in the language of the question yields stronger confidence estimates. These phenomena inspire a simple yet effective native-tone prompting strategy that employs language-specific prompts for LS tasks, effectively improving LLMs' reliability and accuracy in LS scenarios.
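To make the native-tone prompting idea concrete, here is a minimal sketch of verbalized confidence elicitation in the question's own language. The template texts, language codes, and function names below are illustrative assumptions, not the paper's actual prompts; the key point is selecting the confidence-elicitation instruction by language rather than defaulting to English.

```python
import re

# Hypothetical per-language templates for verbalized confidence elicitation.
# The actual MlingConf prompts may differ; these only illustrate the idea.
CONFIDENCE_TEMPLATES = {
    "en": ("Question: {q}\nAnswer: {a}\n"
           "How confident are you in this answer? "
           "Reply with a number between 0 and 1."),
    "zh": ("问题: {q}\n回答: {a}\n"
           "你对这个回答有多大信心？请用 0 到 1 之间的数字回答。"),
    "ja": ("質問: {q}\n回答: {a}\n"
           "この回答にどの程度自信がありますか？0から1の数値で答えてください。"),
}

def build_confidence_prompt(question: str, answer: str, lang: str) -> str:
    """Build a confidence prompt in the question's native language,
    falling back to English when no native template is available."""
    template = CONFIDENCE_TEMPLATES.get(lang, CONFIDENCE_TEMPLATES["en"])
    return template.format(q=question, a=answer)

def parse_confidence(reply: str, default: float = 0.5) -> float:
    """Extract the first number from the model's reply and clamp to [0, 1]."""
    match = re.search(r"\d*\.?\d+", reply)
    if match is None:
        return default
    return min(max(float(match.group()), 0.0), 1.0)
```

For an LS question about, say, Japanese geography, `build_confidence_prompt(q, a, "ja")` would elicit confidence with Japanese instructions instead of English ones, which is the native-tone strategy the abstract describes.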