🤖 AI Summary
Existing research on LLM trustworthiness is predominantly English-centric, leaving trustworthiness in low-resource languages poorly understood. Method: We introduce XTRUST, the first comprehensive multilingual trustworthiness benchmark, covering 10 languages and nine dimensions (including illicit behavior, hallucination, physical and mental health, toxicity, and fairness), alongside a unified multilingual evaluation framework. The framework provides cross-lingually comparable fine-grained tasks, standardized metrics, multilingual prompt engineering, cross-lingual consistency verification, domain-adaptive test-set construction, out-of-distribution robustness testing, and combined human and automated annotation. Contribution/Results: Experiments reveal significant trustworthiness degradation in mainstream LLMs on low-resource languages such as Arabic and Russian. We publicly release the benchmark, evaluation toolkit, and full results to advance research on, and deployment of, trustworthy multilingual AI.
📝 Abstract
Large language models (LLMs) have demonstrated remarkable capabilities across a range of natural language processing (NLP) tasks, capturing the attention of both practitioners and the broader public. A key question that now preoccupies the AI community concerns the capabilities and limitations of these models, with trustworthiness emerging as a central issue, particularly as LLMs are increasingly applied in sensitive fields like healthcare and finance, where errors can have serious consequences. However, most previous studies on the trustworthiness of LLMs have been limited to a single language, typically English, the language that dominates their training data. In response to the growing global deployment of LLMs, we introduce XTRUST, the first comprehensive multilingual trustworthiness benchmark. XTRUST encompasses a diverse range of topics, including illegal activities, hallucination, out-of-distribution (OOD) robustness, physical and mental health, toxicity, fairness, misinformation, privacy, and machine ethics, across 10 different languages. Using XTRUST, we conduct an empirical evaluation of the multilingual trustworthiness of five widely used LLMs, offering an in-depth analysis of their performance across languages and tasks. Our results indicate that many LLMs struggle with certain low-resource languages, such as Arabic and Russian, highlighting the considerable room for improvement in the multilingual trustworthiness of current language models. The code is available at https://github.com/LluckyYH/XTRUST.
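The evaluation described above scores a model separately for each language–dimension pair so that degradation on specific languages becomes visible. A minimal sketch of such an aggregation loop is shown below; note that `evaluate`, `answer_fn`, and the benchmark layout are illustrative assumptions for exposition, not the API of the released XTRUST toolkit.

```python
def evaluate(benchmark, answer_fn):
    """Compute accuracy per (language, dimension) cell.

    benchmark: dict mapping (lang, dimension) -> list of (prompt, gold_label)
               pairs, e.g. ("ar", "toxicity") -> [("...", "unsafe"), ...]
    answer_fn: callable(prompt, lang) -> predicted label (wraps the LLM call)

    Returns a dict mapping (lang, dimension) -> accuracy in [0, 1],
    which can then be inspected for per-language trustworthiness gaps.
    """
    scores = {}
    for (lang, dim), items in benchmark.items():
        if not items:
            continue  # skip empty cells rather than dividing by zero
        correct = sum(answer_fn(prompt, lang) == gold for prompt, gold in items)
        scores[(lang, dim)] = correct / len(items)
    return scores
```

With a real model behind `answer_fn`, comparing the rows of this table across languages (e.g. English vs. Arabic on the same dimension) surfaces exactly the kind of low-resource degradation the paper reports.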