Towards Safer Pretraining: Analyzing and Filtering Harmful Content in Webscale datasets for Responsible LLMs

📅 2025-05-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of identifying and filtering harmful content—including hate speech, misinformation, and social biases—present in large-scale web corpora (e.g., Common Crawl, C4) used for LLM pretraining. Methodologically, the authors propose: (1) a novel dual-dimensional harm taxonomy (Topical/Toxic); (2) HarmFormer, a dedicated toxicity detection model, and TTP, a prompt-driven evaluation framework; (3) a hybrid rule-based and model-based multi-stage filtering pipeline; and (4) an adversarial toxicity-injection analysis mechanism. Contributions include: (1) HAVOC, the first multi-harm toxicity benchmark tailored to open-generation settings; (2) public release of model-level safety signals across the full C4 corpus; and (3) state-of-the-art detection performance (Topical F1 = 0.92, Toxic F1 = 0.89), yielding a 37% reduction in downstream toxic outputs during pretraining and significantly improved RAI compliance.

📝 Abstract
Large language models (LLMs) have become integral to various real-world applications, leveraging massive, web-sourced datasets like Common Crawl, C4, and FineWeb for pretraining. While these datasets provide linguistic data essential for high-quality natural language generation, they often contain harmful content, such as hate speech, misinformation, and biased narratives. Training LLMs on such unfiltered data risks perpetuating toxic behaviors, spreading misinformation, and amplifying societal biases, which can undermine trust in LLM-driven applications and raise ethical concerns about their use. This paper presents a large-scale analysis of inappropriate content across these datasets, offering a comprehensive taxonomy that categorizes harmful webpages into Topical and Toxic based on their intent. We also introduce a high-accuracy prompt evaluation dataset, the Topical and Toxic Prompt (TTP) set, and a transformer-based model (HarmFormer) for content filtering. Additionally, we create a new multi-harm open-ended toxicity benchmark (HAVOC) and provide crucial insights into how models respond to adversarial toxic inputs. Upon publication, we will also open-source our model signals on the entire C4 dataset. Our work offers insights into ensuring safer LLM pretraining and serves as a resource for Responsible AI (RAI) compliance.
Problem

Research questions and friction points this paper is trying to address.

Analyzing harmful content in web-scale datasets for LLMs
Filtering toxic and biased content to ensure safer pretraining
Developing tools for responsible AI compliance in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale harmful content analysis in datasets
Transformer-based HarmFormer for content filtering
New toxicity benchmark HAVOC for evaluation
Authors
Sai Krishna Mendu (Microsoft)
H. Yenala (Microsoft)
Aditi Gulati (Microsoft)
Shanu Kumar (Mohammed Bin Zayed University of Artificial Intelligence)
Parag Agrawal (Microsoft)

Fields: Machine Learning, Natural Language Processing, Computer Vision