TuneShield: Mitigating Toxicity in Conversational AI while Fine-tuning on Untrusted Data

📅 2025-07-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the dual challenges of toxicity proliferation and degraded dialogue quality when fine-tuning LLM-based chatbots on untrusted conversational data, this paper proposes TuneShield—a lightweight, adaptive defense framework. Methodologically, TuneShield leverages the LLM’s intrinsic instruction-following and safety-alignment capabilities for fine-grained toxicity detection, then generates semantically coherent and stylistically consistent synthetic “healing data” to enable robust fine-tuning. Crucially, it requires no human annotation or external toxicity classifiers. Contributions include: (1) a self-supervised, model-intrinsic detection mechanism; (2) dynamic, context-aware healing data synthesis; and (3) seamless integration into standard fine-tuning pipelines. Experiments demonstrate that TuneShield reduces toxic outputs by over 60% on average across diverse toxicity injection and jailbreaking attacks, while preserving dialogue fluency, informativeness, and response relevance—outperforming all baseline defense methods.
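The detection step described above can be sketched in a few lines. This is a minimal illustration of the idea of using an instruction-following LLM as a toxicity classifier, not the paper's actual implementation; the prompt wording, function names, and the `llm_generate` callable are all assumptions.

```python
# Hypothetical sketch of LLM-based toxicity screening in the spirit of
# TuneShield's detection step. The prompt and names are illustrative
# assumptions, not the paper's implementation.

PROMPT = (
    "You are a safety reviewer. Reply with exactly 'toxic' or 'safe'.\n"
    "Message: {message}\nLabel:"
)

def classify_toxicity(message, llm_generate):
    """Ask an instruction-following LLM to label one conversation turn.

    `llm_generate` is any callable mapping a prompt string to a completion
    string; it stands in for a real model call (local or hosted).
    """
    reply = llm_generate(PROMPT.format(message=message)).strip().lower()
    return reply.startswith("toxic")

def filter_dataset(samples, llm_generate):
    """Split untrusted training samples into (clean, flagged) lists."""
    clean, flagged = [], []
    for sample in samples:
        (flagged if classify_toxicity(sample, llm_generate) else clean).append(sample)
    return clean, flagged
```

In practice `llm_generate` would wrap a safety-aligned model; here a trivial stub is enough to exercise the control flow.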

📝 Abstract
Recent advances in foundation models, such as LLMs, have revolutionized conversational AI. Chatbots are increasingly being developed by customizing LLMs on specific conversational datasets. However, mitigating toxicity during this customization, especially when dealing with untrusted training data, remains a significant challenge. To address this, we introduce TuneShield, a defense framework designed to mitigate toxicity during chatbot fine-tuning while preserving conversational quality. TuneShield leverages LLM-based toxicity classification, utilizing the instruction-following capabilities and safety alignment of LLMs to effectively identify toxic samples, outperforming industry API services. TuneShield generates synthetic conversation samples, termed 'healing data', based on the identified toxic samples, using them to mitigate toxicity while reinforcing desirable behavior during fine-tuning. It performs an alignment process to further nudge the chatbot towards producing desired responses. Our findings show that TuneShield effectively mitigates toxicity injection attacks while preserving conversational quality, even when the toxicity classifiers are imperfect or biased. TuneShield proves to be resilient against adaptive adversarial and jailbreak attacks. Additionally, TuneShield demonstrates effectiveness in mitigating adaptive toxicity injection attacks during dialog-based learning (DBL).
Problem

Research questions and friction points this paper is trying to address.

Mitigating toxicity in chatbots during fine-tuning on untrusted data
Effectively identifying toxic samples using LLM-based classification
Generating synthetic healing data to reinforce desirable chatbot behavior
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based toxicity classification that outperforms industry API services
Generates synthetic healing data to mitigate toxicity
Alignment process nudges chatbot towards desired responses
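The "healing data" idea listed above can be illustrated with a short sketch: responses flagged as toxic are replaced with synthetic safe responses generated from the same conversational context, and the repaired mix is then used for fine-tuning. This is a hypothetical rendering under assumed names (`make_healing_sample`, `heal_dataset`, `llm_generate`), not the paper's actual procedure.

```python
# Illustrative sketch of healing-data generation: swap flagged toxic
# responses for synthetic safe replacements that keep the conversational
# context. All names here are assumptions for illustration.

def make_healing_sample(context, llm_generate):
    """Generate a safe replacement response for a flagged conversation turn."""
    prompt = (
        "Write a polite, helpful reply to the following message, "
        "keeping the conversational style:\n" + context
    )
    return {"context": context, "response": llm_generate(prompt)}

def heal_dataset(samples, is_toxic, llm_generate):
    """Keep clean samples as-is; replace toxic responses with healing samples.

    `samples` are dicts with "context" and "response" keys; `is_toxic` is a
    classifier over response strings (e.g. the LLM-based screen).
    """
    healed = []
    for sample in samples:
        if is_toxic(sample["response"]):
            healed.append(make_healing_sample(sample["context"], llm_generate))
        else:
            healed.append(sample)
    return healed
```

The healed dataset would then feed a standard fine-tuning pipeline, with the alignment step further reinforcing the safe responses.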