OpenEthics: A Comprehensive Ethical Evaluation of Open-Source Generative Large Language Models

📅 2025-05-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing open-source generative large language models (LLMs) suffer from three key limitations in ethical evaluation: narrow assessment dimensions (often focusing on a single ethical attribute), insufficient linguistic coverage (biased toward high-resource languages), and low model diversity. To address these gaps, this study systematically evaluates 29 open-source LLMs across four ethical dimensions—robustness, reliability, safety, and fairness—using a unified, cross-lingual (English/Turkish), multi-model, and multi-dimensional framework. We innovatively employ the LLM-as-a-Judge paradigm, multilingual prompt engineering, and standardized benchmark suites. Our findings reveal that ethical performance is largely language-agnostic yet positively correlated with parameter count; reliability emerges as a pervasive weakness across models; Gemma and Qwen achieve the best overall scores; and targeted safety and fairness enhancements yield significant improvements. This work fills critical gaps in the breadth, linguistic diversity, and model coverage of open-source LLM ethical evaluation.

Technology Category

Application Category

📝 Abstract
Generative large language models present significant potential but also raise critical ethical concerns. Most studies focus on narrow ethical dimensions, and also limited diversity of languages and models. To address these gaps, we conduct a broad ethical evaluation of 29 recent open-source large language models using a novel data collection including four ethical aspects: Robustness, reliability, safety, and fairness. We analyze model behavior in both a commonly used language, English, and a low-resource language, Turkish. Our aim is to provide a comprehensive ethical assessment and guide safer model development by filling existing gaps in evaluation breadth, language coverage, and model diversity. Our experimental results, based on LLM-as-a-Judge, reveal that optimization efforts for many open-source models appear to have prioritized safety and fairness, and demonstrated good robustness while reliability remains a concern. We demonstrate that ethical evaluation can be effectively conducted independently of the language used. In addition, models with larger parameter counts tend to exhibit better ethical performance, with Gemma and Qwen models demonstrating the most ethical behavior among those evaluated.
Problem

Research questions and friction points this paper is trying to address.

Evaluates ethical concerns in open-source large language models
Assesses models across robustness, reliability, safety, and fairness
Examines performance in both English and low-resource Turkish
Innovation

Methods, ideas, or system contributions that make the work stand out.

Broad ethical evaluation of 29 open-source LLMs
Novel data collection covering four ethical aspects
Evaluation in English and low-resource Turkish language
B
Burak Erincc cCetin
Middle East Technical University, Computer Engineering Department, Applied NLP Group
Y
Yildirim Ozen
Middle East Technical University, Computer Engineering Department, Applied NLP Group
E
Elif Naz Demiryilmaz
Middle East Technical University, Computer Engineering Department, Applied NLP Group
K
Kaan Engur
Middle East Technical University, Computer Engineering Department, Applied NLP Group
Cagri Toraman
Cagri Toraman
Asst. Prof. Middle East Technical University, Department of Computer Engineering
natural language processinginformation retrievalsocial computing