🤖 AI Summary
Existing open-source generative large language models (LLMs) suffer from three key limitations in ethical evaluation: narrow assessment dimensions (often focusing on a single ethical attribute), insufficient linguistic coverage (biased toward high-resource languages), and low model diversity. To address these gaps, this study systematically evaluates 29 open-source LLMs across four ethical dimensions—robustness, reliability, safety, and fairness—using a unified, cross-lingual (English/Turkish), multi-model, and multi-dimensional framework. We innovatively employ the LLM-as-a-Judge paradigm, multilingual prompt engineering, and standardized benchmark suites. Our findings reveal that ethical performance is largely language-agnostic yet positively correlated with parameter count; reliability emerges as a pervasive weakness across models; Gemma and Qwen achieve the best overall scores; and targeted safety and fairness enhancements yield significant improvements. This work fills critical gaps in the breadth, linguistic diversity, and model coverage of open-source LLM ethical evaluation.
📝 Abstract
Generative large language models present significant potential but also raise critical ethical concerns. Most studies focus on narrow ethical dimensions, and also limited diversity of languages and models. To address these gaps, we conduct a broad ethical evaluation of 29 recent open-source large language models using a novel data collection including four ethical aspects: Robustness, reliability, safety, and fairness. We analyze model behavior in both a commonly used language, English, and a low-resource language, Turkish. Our aim is to provide a comprehensive ethical assessment and guide safer model development by filling existing gaps in evaluation breadth, language coverage, and model diversity. Our experimental results, based on LLM-as-a-Judge, reveal that optimization efforts for many open-source models appear to have prioritized safety and fairness, and demonstrated good robustness while reliability remains a concern. We demonstrate that ethical evaluation can be effectively conducted independently of the language used. In addition, models with larger parameter counts tend to exhibit better ethical performance, with Gemma and Qwen models demonstrating the most ethical behavior among those evaluated.