Detecting and Filtering Unsafe Training Data via Data Attribution

📅 2025-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
In LLM training, even a small fraction of unsafe data can trigger harmful behaviors. Existing supervised filtering methods rely on predefined taxonomies, are computationally expensive, generalize poorly, and ignore the training dynamics that make certain examples influential. Method: We propose DABUF, a framework that combines influence (gradient-based) data attribution with a lightweight moderation classifier. For complex harmful outputs (e.g., jailbreak samples), the classifier first narrows attribution to a minimal high-risk subset; for simpler harmful outputs, the model's outputs are attributed directly, removing the dependence on fixed taxonomies. Contribution/Results: On jailbreak data filtering, DABUF improves detection AUPRC by up to 7.5%; on gender bias detection, accuracy increases by up to 44.1%. Retraining on DABUF-filtered data improves model safety across diverse risk types, demonstrating cross-risk generalization.
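To make the attribution idea concrete, here is a minimal sketch of gradient-based influence scoring in Python. It is an illustration only, not the paper's implementation: the function name `influence_scores`, its arguments, and the use of TracIn-style cosine gradient similarity as the influence estimator are all assumptions.

```python
import torch
import torch.nn.functional as F

def influence_scores(model, loss_fn, train_examples, target_batch):
    """Score training examples by gradient similarity to a harmful target output.

    Assumes every trainable parameter receives a gradient; cosine similarity of
    gradients (TracIn-style) stands in for whatever estimator the paper uses.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient of the loss on the harmful target output (the attribution target).
    model.zero_grad()
    loss_fn(model, target_batch).backward()
    target_grad = torch.cat([p.grad.detach().flatten() for p in params])

    scores = []
    for example in train_examples:
        # Per-example gradient on a single training point.
        model.zero_grad()
        loss_fn(model, example).backward()
        grad = torch.cat([p.grad.detach().flatten() for p in params])
        # High similarity suggests the example pushed the model toward the
        # harmful output, making it a candidate for filtering.
        scores.append(F.cosine_similarity(grad, target_grad, dim=0).item())
    return scores
```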

📝 Abstract
Large language models (LLMs) are vulnerable to unsafe training data: even small amounts can lead to harmful model behaviors. Detecting and filtering such unsafe training data is essential for trustworthy model development. Current state-of-the-art (SOTA) approaches typically rely on training moderation classifiers, which requires significant computational overhead and limits detection to predefined taxonomies, making them less adaptable to evolving safety concerns. Moreover, these classifiers lack insight into the training process, limiting their effectiveness in filtering unsafe data. To address these limitations, we propose DABUF, which leverages data attribution to detect and filter unsafe training data by attributing harmful model outputs to influential training data points. DABUF enables flexible identification of various unsafe data types without predefined taxonomies. In practice, however, model outputs can be complex, combining safe linguistic features with unsafe content, which reduces attribution accuracy. In such cases, DABUF integrates moderation classifiers to identify a minimal subset of unsafe training data for targeted attribution (e.g., for jailbreak data). When model outputs are relatively straightforward, DABUF uses them directly as the attribution targets. We evaluate performance on two tasks: filtering jailbreak training data, and identifying and mitigating gender bias. DABUF outperforms SOTA approaches by up to 7.5% in detection AUPRC in jailbreak scenarios and by up to 44.1% in detecting gender bias. Moreover, retraining on DABUF-filtered data leads to higher model safety across experiments, underscoring its versatility in addressing a broad spectrum of unsafe data issues.
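As a reading aid, the abstract's two operating modes can be sketched as a small dispatch step. Everything here is assumed: `moderation_classifier` (a callable returning an unsafe-probability) and the 0.5 threshold are placeholders, not the paper's API.

```python
def select_attribution_targets(model_outputs, moderation_classifier, outputs_are_complex):
    """Pick which model outputs to attribute back to training data."""
    if outputs_are_complex:
        # Complex outputs (e.g., jailbreak responses) mix safe linguistic
        # features with unsafe content, so a moderation classifier first
        # flags a minimal high-risk subset for targeted attribution.
        return [out for out in model_outputs if moderation_classifier(out) > 0.5]
    # Straightforward harmful outputs serve directly as attribution targets.
    return list(model_outputs)
```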
Problem

Research questions and friction points this paper is trying to address.

Detecting unsafe training data in LLMs
Filtering harmful data without predefined taxonomies
Improving model safety through targeted attribution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data attribution for unsafe data detection (see the sketch after this list)
Integration of moderation classifiers
Flexible identification without predefined taxonomies
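Putting these pieces together, a plausible end-to-end workflow (assumed here, not taken from the paper) is: score training points against harmful targets, drop the most influential ones, and retrain. `score_fn` could wrap the `influence_scores` sketch above; `drop_frac` is a made-up knob.

```python
def filter_and_retrain(model, train_set, targets, score_fn, retrain_fn, drop_frac=0.01):
    """Drop the training points most responsible for harmful outputs, then retrain."""
    scores = score_fn(model, train_set, targets)
    # Rank training points by influence on the harmful targets, highest first.
    ranked = sorted(range(len(train_set)), key=lambda i: scores[i], reverse=True)
    flagged = set(ranked[: int(drop_frac * len(train_set))])
    cleaned = [ex for i, ex in enumerate(train_set) if i not in flagged]
    # The paper reports that retraining on filtered data improves model safety.
    return retrain_fn(cleaned)
```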