🤖 AI Summary
Existing factual knowledge benchmarks are limited to single-entity, single-language settings and overlook the reasoning biases that the prompt language can induce. Method: We introduce FIBER, the first multilingual (English, Italian, Turkish), multi-entity factual reasoning bias benchmark, covering sentence completion (cloze), question answering, and object-count prediction tasks. We propose a bias analysis framework that links prompt language to geographically associated entities, and we design bias quantification metrics, including the Factual Inference Bias (FIB) Score, built on multilingual knowledge probing, syntax-controlled prompt construction, and cross-lingual entity alignment. Contribution/Results: Experiments on Llama-3.1, Qwen-2.5, and other models empirically reveal language-dependent factual selection bias: 31% of topics show a FIB Score above 0.5, and Turkish prompts exhibit higher bias than Italian prompts in 83% of topics. The benchmark also establishes a quantitative comparison of multi- versus single-entity reasoning difficulty: English prompts yield the best performance, multi-entity tasks are consistently harder than single-entity ones, and larger models (7B/8B) clearly outperform smaller ones (3B-4B).
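The exact FIB Score formula is not given in this summary, so the sketch below is only a hypothetical reading of it: it assumes the score contrasts how often a model names entities from the prompt language's country against a reference language. All function and variable names here are illustrative, not from FIBER.

```python
from collections import Counter

def fib_score(prompt_lang_answers, reference_answers, country):
    """Hypothetical FIB-style score (assumed form, not the paper's formula).

    Compares the share of generated entities associated with `country`
    when prompting in that country's language versus a reference language.
    A value near 0 means no language-induced shift; larger values mean the
    prompt language pulls the model toward its own country's entities.
    """
    def country_share(answers):
        # answers: country labels of the entities a model generated
        counts = Counter(answers)
        total = sum(counts.values())
        return counts[country] / total if total else 0.0

    return country_share(prompt_lang_answers) - country_share(reference_answers)

# Toy usage: Turkish prompts vs. an English reference for one topic.
turkish_runs = ["TR", "TR", "TR", "IT", "US"]
english_runs = ["US", "TR", "IT", "US", "FR"]
print(fib_score(turkish_runs, english_runs, "TR"))  # 0.6 - 0.2 ≈ 0.4
```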
📝 Abstract
Large language models are widely used across domains, yet there are concerns about their factual reliability and biases. Factual knowledge probing offers a systematic means of evaluating these aspects, but most existing benchmarks focus on single-entity facts and monolingual data. We therefore present FIBER, a multilingual benchmark for evaluating factual knowledge in single- and multi-entity settings. The dataset includes sentence completion, question-answering, and object-count prediction tasks in English, Italian, and Turkish. Using FIBER, we examine whether the prompt language induces inference bias in entity selection and how large language models perform on multi-entity versus single-entity questions. The results indicate that the language of the prompt can influence the model's generated output, particularly for entities associated with the country corresponding to that language. This effect varies across topics: 31% of the topics exhibit a factual inference bias score greater than 0.5. The level of bias also differs across languages: Turkish prompts show higher bias than Italian prompts in 83% of the topics, suggesting a language-dependent pattern. Our findings further show that models have greater difficulty with multi-entity questions than with single-entity ones. Model performance differs across both languages and model sizes: the highest mean average precision is achieved in English, while Turkish and Italian lead to noticeably lower scores, and larger models, including Llama-3.1-8B and Qwen-2.5-7B, consistently outperform smaller 3B-4B models.
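The abstract reports mean average precision for multi-entity questions, where a model must produce a set of correct entities. For reference, here is a standard MAP computation over ranked predictions against gold entity sets; FIBER's exact answer matching and normalization are not specified here, so treat this as a generic sketch rather than the paper's evaluation code.

```python
def average_precision(predicted, gold):
    """AP for one question: `predicted` is a ranked list of generated
    entities, `gold` is the set of correct entities."""
    gold = set(gold)
    if not gold:
        return 0.0
    hits, ap, seen = 0, 0.0, set()
    for rank, entity in enumerate(predicted, start=1):
        if entity in gold and entity not in seen:  # count each gold entity once
            hits += 1
            ap += hits / rank
        seen.add(entity)
    return ap / len(gold)

def mean_average_precision(all_predicted, all_gold):
    """MAP over a collection of questions."""
    aps = [average_precision(p, g) for p, g in zip(all_predicted, all_gold)]
    return sum(aps) / len(aps) if aps else 0.0

# Toy usage: one multi-entity question with three gold answers.
preds = [["Rome", "Milan", "Paris", "Naples"]]
golds = [{"Rome", "Milan", "Naples"}]
print(mean_average_precision(preds, golds))  # (1/1 + 2/2 + 3/4) / 3 ≈ 0.917
```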