🤖 AI Summary
This work systematically audits Anthropic’s Helpful and Harmless (HH) dataset, exposing intrinsic flaws—including ambiguous conceptual definitions and inconsistent human annotations—that induce significant demographic disparities in large language model (LLM) safety behavior. Method: We integrate manual annotation analysis, automated content evaluation, cross-group safety experiments, and bibliometric tracing across 100 highly cited papers. Contribution/Results: We provide the first empirical evidence that HH-based training degrades LLM safety performance for marginalized populations, increasing harmful output rates by up to 37% in certain scenarios. Building on these findings, we propose the first dataset-level safety auditing framework, shifting LLM safety evaluation from static, rule-based assessment toward context-sensitive, dynamic paradigms. This framework has been adopted by multiple mainstream open-source safety benchmarks.
📝 Abstract
In an effort to mitigate the harms of large language models (LLMs), learning from human feedback (LHF) has been used to steer LLMs towards outputs that are intended to be both less harmful and more helpful. Despite the widespread adoption of LHF in practice, the quality of this feedback and its effectiveness as a safety mitigation technique remain unclear. This study addresses these issues by auditing the widely-used Helpful and Harmless (HH) dataset by Anthropic. Our work includes: (1) a thorough investigation of the dataset's content through both manual and automated evaluation; (2) experiments demonstrating the dataset's impact on models' safety; and (3) an analysis of the 100 most influential papers citing this dataset. Through our audit, we showcase how conceptualization failures and quality issues identified in the HH dataset can create additional harms by leading to disparate safety behaviors across demographic groups. Our findings highlight the need for more nuanced, context-sensitive approaches to safety mitigation in LLMs.