๐ค AI Summary
This study addresses a critical gap in automated essay scoring (AES) systems and large language models (LLMs), which often fail to detect harmful contentโsuch as racism or gender biasโin argumentative essays and may erroneously assign high scores to such texts. To tackle this issue, the work introduces an ethical dimension into AES evaluation for the first time, presenting the Harmful Essay Detection (HED) benchmark dataset specifically designed to assess the ability to identify harmful arguments. It also proposes a corresponding ethical evaluation framework. Experimental results demonstrate that prevailing LLMs and AES systems generally lack the capacity to distinguish between harmful and legitimate reasoning, revealing a significant blind spot in their moral judgment. This research establishes a foundational benchmark and offers a clear direction for developing ethically aware next-generation AES systems.
๐ Abstract
This study addresses critical gaps in Automated Essay Scoring (AES) systems and Large Language Models (LLMs) with regard to their ability to effectively identify and score harmful essays. Despite advancements in AES technology, current models often overlook ethically and morally problematic elements within essays, erroneously assigning high scores to essays that may propagate harmful opinions. In this study, we introduce the Harmful Essay Detection (HED) benchmark, which includes essays integrating sensitive topics such as racism and gender bias, to test the efficacy of various LLMs in recognizing and scoring harmful content. Our findings reveal that: (1) LLMs require further enhancement to accurately distinguish between harmful and argumentative essays, and (2) both current AES models and LLMs fail to consider the ethical dimensions of content during scoring. The study underscores the need for developing more robust AES systems that are sensitive to the ethical implications of the content they are scoring.