Towards Safer AI Moderation: Evaluating LLM Moderators Through a Unified Benchmark Dataset and Advocating a Human-First Approach

📅 2025-08-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) struggle with fine-grained ethical reasoning tasks—such as detecting implicit hate speech, offensive language, and gender bias—and exhibit ethical risks and output inconsistency due to training data biases. To address this, we propose a human-centered content moderation paradigm: (1) we construct the first unified benchmark dataset covering 49 fine-grained emotion and bias categories; (2) we design a human-in-the-loop evaluation framework built upon state-of-the-art LLMs; and (3) we develop SafePhi—a lightweight, deployable ethics-aligned moderator—via QLoRA fine-tuning of Phi-4 for diverse ethical contexts. Experiments show SafePhi achieves a Macro F1 score of 0.89, significantly outperforming OpenAI Moderator (0.77) and Llama Guard (0.74). Our core contributions are: (i) the first fine-grained, unified ethical bias benchmark; (ii) a human-first moderation framework; and (iii) an efficient, production-ready ethical alignment model for content safety.

📝 Abstract
As AI systems become more integrated into daily life, the need for safer and more reliable moderation has never been greater. Large Language Models (LLMs) have demonstrated remarkable capabilities, surpassing earlier models in complexity and performance. Their evaluation across diverse tasks has consistently showcased their potential, enabling the development of adaptive and personalized agents. However, despite these advancements, LLMs remain prone to errors, particularly in areas requiring nuanced moral reasoning. They struggle to detect implicit hate, offensive language, and gender bias due to the subjective and context-dependent nature of these issues. Moreover, their reliance on training data can inadvertently reinforce societal biases, leading to inconsistencies and ethical concerns in their outputs. To explore the limitations of LLMs in this role, we developed an experimental framework based on state-of-the-art (SOTA) models to assess human emotions and offensive behaviors. The framework introduces a unified benchmark dataset encompassing 49 distinct categories spanning the wide spectrum of human emotions, offensive and hateful text, and gender and racial biases. Furthermore, we introduce SafePhi, a QLoRA fine-tuned version of Phi-4 adapted to diverse ethical contexts, which outperforms benchmark moderators, achieving a Macro F1 score of 0.89 where OpenAI Moderator and Llama Guard score 0.77 and 0.74, respectively. This research also highlights the critical domains where LLM moderators consistently underperform, underscoring the need to incorporate more heterogeneous and representative data with human-in-the-loop oversight for better model robustness and explainability.
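The reported scores are macro-averaged F1, which computes F1 per class and takes the unweighted mean, so each of the 49 categories counts equally regardless of how often it appears. A minimal pure-Python sketch of the metric (the label names in the usage example are illustrative, not drawn from the paper's taxonomy):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 scores, averaged with equal weight,
    so rare categories matter as much as frequent ones."""
    labels = sorted(set(y_true) | set(y_pred))
    per_class = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        per_class.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(per_class) / len(per_class)

# Illustrative labels only:
score = macro_f1(["hate", "safe", "safe", "bias"],
                 ["hate", "safe", "hate", "bias"])
```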
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM moderators' limitations in detecting implicit hate and biases
Addressing inconsistencies in AI moderation due to subjective contextual issues
Proposing human-in-the-loop approaches for robust and explainable moderation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified benchmark dataset with 49 categories
SafePhi: QLoRA fine-tuned Phi-4 model
Human-in-the-loop for robustness and explainability
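The SafePhi recipe, QLoRA fine-tuning of Phi-4, can be sketched with the Hugging Face peft and bitsandbytes libraries. This is a configuration sketch under assumptions: the rank, alpha, dropout, and target modules below are illustrative defaults, not hyperparameters reported in the paper.

```python
# Hypothetical QLoRA setup for Phi-4; hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA: 4-bit quantized base weights
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16
    bnb_4bit_use_double_quant=True,         # double quantization of constants
)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-4",
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                  # adapter rank (assumption, not from the paper)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Only the small LoRA adapters are trained; the 4-bit base stays frozen.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```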
Naseem Machlovi
Fordham University
NLP · AI · ML · Cloud Computing
Maryam Saleki
PhD student, Fordham University
Deep Learning · Machine Learning · Conversational AI
Innocent Ababio
Fordham University, New York, NY 10023, USA
Ruhul Amin
Fordham University, New York, NY 10023, USA