🤖 AI Summary
This study identifies a pervasive epistemic miscalibration problem in large language models (LLMs) deployed across global languages: LLMs overgenerate definitive epistemic markers (e.g., "It's definitely") in all five languages studied, including English and Japanese, and users strongly rely on these confident generations. Through cross-linguistic analysis of epistemic marker distributions and multilingual user reliance experiments, the authors show that uncertainty expression patterns and the corresponding human reliance behaviors are strongly language- and culture-specific; notably, Japanese users rely more on hedged, uncertain utterances than English users do. Moving beyond monolingual safety evaluation paradigms, the work argues that linguistic variation and culturally embedded epistemic norms must be jointly integrated into LLM safety assessment frameworks, providing both theoretical grounding and methodological guidance for trustworthy multilingual AI deployment.
📝 Abstract
As large language models (LLMs) are deployed globally, it is crucial that their responses are calibrated across languages to accurately convey uncertainty and limitations. Previous work has shown that LLMs are linguistically overconfident in English, leading users to overrely on confident generations. However, the usage and interpretation of epistemic markers (e.g., 'It's definitely,' 'I think') can differ sharply across languages. Here, we study the risks of multilingual linguistic (mis)calibration, overconfidence, and overreliance across five languages to evaluate the safety of LLMs in a global context.
We find that overreliance risks are high across all languages. We first analyze the distribution of LLM-generated epistemic markers and observe that while LLMs are cross-linguistically overconfident, they are also sensitive to documented linguistic variation. For example, models generate the most markers of uncertainty in Japanese and the most markers of certainty in German and Mandarin. We then measure human reliance rates across languages, finding that while users strongly rely on confident LLM generations in all languages, reliance behaviors differ cross-linguistically: for example, users rely significantly more on expressions of uncertainty in Japanese than in English. Taken together, these results indicate a high risk of reliance on overconfident model generations across languages. Our findings highlight the challenges of multilingual linguistic calibration and stress the importance of culturally and linguistically contextualized model safety evaluations.
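To make the marker-distribution analysis described above concrete, the sketch below shows one plausible way to tag model generations with certainty and uncertainty markers and tally their distribution per language. This is an illustrative assumption, not the paper's actual pipeline: the `MARKERS` lexicons and the `tag_generation` / `marker_distribution` helpers are hypothetical, and a real analysis would use much larger, linguistically validated marker lists.

```python
from collections import Counter

# Hypothetical, tiny lexicons of epistemic markers per language.
# A real study would use far larger, validated lists per language.
MARKERS = {
    "en": {"certain": ["definitely", "certainly", "clearly"],
           "uncertain": ["i think", "probably", "it might be"]},
    "ja": {"certain": ["間違いなく", "絶対に"],
           "uncertain": ["と思います", "かもしれません"]},
}

def tag_generation(text: str, lang: str) -> str:
    """Label one model generation as 'certain', 'uncertain', or 'plain'."""
    lowered = text.lower()
    for label in ("certain", "uncertain"):
        if any(marker in lowered for marker in MARKERS[lang][label]):
            return label
    return "plain"

def marker_distribution(generations: list[str], lang: str) -> Counter:
    """Count how often each epistemic-marker category appears in a sample."""
    return Counter(tag_generation(g, lang) for g in generations)

if __name__ == "__main__":
    sample_en = ["It's definitely Paris.", "I think it's Paris.", "Paris."]
    print(marker_distribution(sample_en, "en"))
    # Counter({'certain': 1, 'uncertain': 1, 'plain': 1})
```

Comparing such per-language distributions (e.g., the share of "certain" vs. "uncertain" generations in English vs. Japanese) is one simple way to quantify cross-linguistic differences in linguistic overconfidence of the kind the abstract reports.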