🤖 AI Summary
How can large language models (LLMs) be made safe and unbiased without compromising their knowledge retention and language understanding capabilities? This paper introduces the Safe and Responsible LLM (SR$_{\text{LLM}}$), presented as the first "safe-controllable" instruction-tuning paradigm oriented toward safety and debiasing. Built on an autoregressive decoder-only architecture, SR$_{\text{LLM}}$ is trained on a high-quality, safety-aligned instruction dataset that combines bias annotations, corrected sample pairs, and multi-objective collaborative optimization. Evaluated on both a domain-specific test set and out-of-distribution data, SR$_{\text{LLM}}$ achieves substantial reductions across diverse bias metrics (a 42.3% average improvement) while simultaneously outperforming baseline models on multiple language understanding and generation benchmarks. The code and dataset are publicly released.
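To make the dataset construction concrete, below is a minimal sketch of how one instruction example might be assembled from a bias-annotated unsafe sentence and its corrected safe variant. The Alpaca-style instruction/input/output fields, the prompt template, and the sample sentences are illustrative assumptions, not taken from the SR-LLM release.

```python
# Hypothetical construction of a single instruction-tuning record from an
# (unsafe, safe) sentence pair. Field names and the prompt wording are
# assumptions for illustration only.

def build_example(unsafe: str, safe: str, bias_type: str) -> dict:
    """Wrap a biased sentence and its human correction in an instruction format."""
    instruction = (
        "Rewrite the following text so that it is safe and free of "
        f"{bias_type} bias, while preserving its factual content."
    )
    return {
        "instruction": instruction,
        "input": unsafe,   # bias-annotated source sentence
        "output": safe,    # corrected safe variant (training target)
    }

example = build_example(
    unsafe="Women are too emotional to lead engineering teams.",
    safe="Leadership ability in engineering is unrelated to gender.",
    bias_type="gender",
)
print(example)
```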
📝 Abstract
Large Language Models (LLMs) have advanced various Natural Language Processing (NLP) tasks, such as text generation and translation, among others. However, these models often generate texts that can perpetuate biases. Existing approaches to mitigating these biases usually compromise knowledge retention. This study explores whether LLMs can produce safe, unbiased outputs without sacrificing knowledge or comprehension. We introduce the Safe and Responsible Large Language Model (SR$_{\text{LLM}}$), which is instruction fine-tuned atop a safety fine-tuned, auto-regressive decoder-only LLM to reduce bias in generated text. We developed a specialized dataset of unsafe examples paired with corresponding safe variants to train SR$_{\text{LLM}}$ to identify and correct biased text. Experiments on our specialized dataset and on out-of-distribution test sets show that SR$_{\text{LLM}}$ effectively reduces bias while preserving knowledge integrity, surpassing both traditionally fine-tuned smaller language models and base LLMs that merely rely on prompting techniques. Our findings demonstrate that instruction fine-tuning on custom datasets tailored to tasks such as debiasing is a highly effective strategy for minimizing bias in LLMs while preserving their inherent knowledge and capabilities. The code and dataset are accessible at https://github.com/shainarazavi/Safe-Responsible-LLM (SR-LLM).
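The abstract describes the training recipe only at a high level. Assuming it follows standard supervised instruction fine-tuning of a decoder-only model on the unsafe/safe pairs, the sketch below shows what such a run could look like with Hugging Face transformers; the base model name, data file, prompt layout, and hyperparameters are placeholders rather than the paper's actual configuration.

```python
# Minimal sketch of supervised instruction fine-tuning on unsafe -> safe pairs.
# Model name, file path, prompt format, and hyperparameters are assumptions,
# not the SR-LLM paper's actual setup.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "meta-llama/Llama-2-7b-hf"  # assumed decoder-only base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Each JSONL record holds an instruction, a biased input, and its safe output
# (hypothetical file produced by a builder like build_example above).
dataset = load_dataset("json", data_files="sr_llm_pairs.jsonl", split="train")

def format_and_tokenize(record):
    # Concatenate prompt and target so the LM learns to emit the safe rewrite.
    text = (f"### Instruction:\n{record['instruction']}\n"
            f"### Input:\n{record['input']}\n"
            f"### Response:\n{record['output']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(format_and_tokenize,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sr-llm",
                           per_device_train_batch_size=2,
                           num_train_epochs=3,
                           learning_rate=2e-5),
    train_dataset=tokenized,
    # mlm=False gives standard causal-LM labels (inputs shifted by one).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```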