🤖 AI Summary
This work addresses the challenge of identifying and filtering harmful content (including hate speech, misinformation, and social biases) in large-scale web corpora (e.g., Common Crawl, C4) used for LLM pretraining. Methodologically, we propose: (1) a novel dual-dimensional harm taxonomy (Topical/Toxic); (2) HarmFormer, a dedicated toxicity detection model, and TTP, a prompt-driven evaluation framework; (3) a hybrid rule- and model-based multi-stage filtering pipeline; and (4) an adversarial toxicity injection analysis mechanism. Our contributions include: (1) HAVOC, the first multi-harm toxicity benchmark tailored to open-generation settings; (2) the public release of model-level safety signals across the full C4 corpus; and (3) state-of-the-art detection performance (Topical F1 = 0.92, Toxic F1 = 0.89), reducing downstream toxic outputs by 37% when applied during pretraining and significantly improving RAI compliance.
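The hybrid multi-stage pipeline mentioned above can be sketched roughly as follows. This is a minimal illustration only: the blocklist pattern, the `harmformer_score` stub (standing in for real classifier inference), and the 0.5 threshold are all assumptions for the sketch, not the paper's actual implementation.

```python
import re

# Hypothetical rule stage: hard-blocked regex patterns (placeholder only).
BLOCK_PATTERNS = [re.compile(p, re.IGNORECASE) for p in [r"\bslur_example\b"]]

def rule_stage(page: str) -> bool:
    """Cheap first pass: keep a page only if no blocked pattern matches."""
    return not any(p.search(page) for p in BLOCK_PATTERNS)

def harmformer_score(page: str) -> float:
    """Stub for a model stage (e.g., a HarmFormer-style classifier).
    A toy keyword heuristic stands in for real model inference here."""
    toxic_markers = ("hate", "attack")
    hits = sum(page.lower().count(m) for m in toxic_markers)
    return min(1.0, hits / 5)

def filter_corpus(pages, threshold=0.5):
    """Multi-stage filter: cheap rules first, then model scoring."""
    kept = []
    for page in pages:
        if not rule_stage(page):
            continue  # dropped by the rule stage
        if harmformer_score(page) >= threshold:
            continue  # dropped by the model stage
        kept.append(page)
    return kept

corpus = ["A friendly cooking article.", "hate hate hate hate hate attack"]
print(filter_corpus(corpus))  # only the benign page survives both stages
```

Running the rules before the model keeps expensive classifier inference off pages that a trivial pattern already rejects, which is the usual motivation for ordering a hybrid pipeline this way.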
📝 Abstract
Large language models (LLMs) have become integral to various real-world applications, leveraging massive, web-sourced datasets like Common Crawl, C4, and FineWeb for pretraining. While these datasets provide linguistic data essential for high-quality natural language generation, they often contain harmful content, such as hate speech, misinformation, and biased narratives. Training LLMs on such unfiltered data risks perpetuating toxic behaviors, spreading misinformation, and amplifying societal biases, which can undermine trust in LLM-driven applications and raise ethical concerns about their use. This paper presents a large-scale analysis of inappropriate content across these datasets, offering a comprehensive taxonomy that categorizes harmful webpages as Topical or Toxic based on their intent. We also introduce the Topical and Toxic Prompt (TTP), a high-accuracy prompt evaluation dataset, and HarmFormer, a transformer-based model for content filtering. Additionally, we create HAVOC, a new multi-harm open-ended toxicity benchmark, and provide crucial insights into how models respond to adversarial toxic inputs. Upon publication, we will also open-source our model signals on the entire C4 dataset. Our work offers insights into ensuring safer LLM pretraining and serves as a resource for Responsible AI (RAI) compliance.