🤖 AI Summary
To address safety risks arising from toxic content generation by large language models (LLMs), this paper proposes a prototype-guided implicit knowledge editing framework. Methodologically, it introduces (1) the construction of adversarial hard negatives, toxic outputs that are semantically similar and close in model probability to their non-toxic counterparts, enabling attribute-aware and robust contrastive optimization; and (2) a prototype-driven contrastive perplexity objective that jointly leverages adversarial rewriting and fine-grained perplexity modeling to achieve controllable text generation. Experimental results show that the method substantially reduces toxic output (an average reduction of 42.6%) while preserving strong generalization across downstream tasks, including commonsense reasoning and reading comprehension, with negligible performance degradation (<0.8%). It thus reconciles safety assurance with functional integrity in LLM deployment.
📝 Abstract
The generation of toxic content by large language models (LLMs) remains a critical challenge for the safe deployment of language technology. We propose a novel framework for implicit knowledge editing and controlled text generation by fine-tuning LLMs with a prototype-based contrastive perplexity objective. Central to our method is the construction of hard negatives: toxic outputs generated through adversarial paraphrasing to be semantically similar, and close in model probability, to their non-toxic counterparts. By training on these challenging and realistic pairs, our approach ensures robust and stable contrastive optimization. Experimental results in the domain of detoxification demonstrate that our method significantly reduces toxic generation while maintaining strong performance on downstream tasks such as commonsense reasoning and reading comprehension. Our findings highlight the effectiveness of exploiting hard negatives for attribute-aware fine-tuning.
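The core idea of a contrastive perplexity objective can be sketched in a few lines: score each sequence by its mean token log-probability (the negative log-perplexity), then contrast the non-toxic positive against its toxic hard negatives so that lowering the loss pushes the model's probability mass toward the positive. The function names, the temperature parameter, and the exact InfoNCE-style form below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def sequence_score(token_logprobs, tau=1.0):
    # Mean token log-probability, i.e. the negative log-perplexity,
    # scaled by a temperature tau (illustrative hyperparameter).
    return sum(token_logprobs) / len(token_logprobs) / tau

def contrastive_perplexity_loss(pos_logprobs, neg_logprobs_list, tau=1.0):
    # InfoNCE-style contrast: the non-toxic positive competes against
    # one or more toxic hard negatives. Lower loss means the model
    # assigns relatively more probability to the non-toxic sequence.
    s_pos = sequence_score(pos_logprobs, tau)
    s_negs = [sequence_score(n, tau) for n in neg_logprobs_list]
    denom = math.exp(s_pos) + sum(math.exp(s) for s in s_negs)
    return -math.log(math.exp(s_pos) / denom)

# A hard negative close in model probability to the positive yields a
# larger loss (and hence a stronger gradient) than an easy negative.
easy = contrastive_perplexity_loss([-0.1] * 5, [[-3.0] * 5])
hard = contrastive_perplexity_loss([-0.1] * 5, [[-0.2] * 5])
```

This toy version makes the paper's motivation concrete: negatives that sit near the positive in model probability dominate the denominator, so they drive most of the optimization signal, which is why adversarially paraphrased hard negatives make the contrastive objective more informative than randomly sampled toxic text.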