Combating Toxic Language: A Review of LLM-Based Strategies for Software Engineering

📅 2025-04-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the risk of large language models (LLMs) generating toxic content—such as discriminatory or aggressive text—in software engineering (SE) contexts. It presents the first systematic survey of toxicity detection and mitigation methods specifically for SE. The authors propose a domain-specific toxicity assessment framework comprising: (1) SE-oriented corpus preprocessing; (2) multi-dimensional human annotation; (3) a hybrid detection model integrating prompt engineering and fine-tuning; and (4) an LLM-driven rewriting technique for toxicity mitigation. Ablation experiments demonstrate that LLM-based rewriting reduces average text toxicity by 37.2%. The study identifies critical gaps in existing approaches—particularly in modeling SE-specific contextual semantics and enabling fine-grained toxicity attribution—and provides both theoretical foundations and practical guidelines for integrating responsible AI practices into software development workflows.
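The mitigation step described above (LLM-driven rewriting, evaluated by average toxicity reduction) can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: the lexicon, the prompt text, the rule-based `llm_rewrite` stub (which substitutes for a real LLM call), and the scoring function are all hypothetical.

```python
# Sketch of the mitigation pipeline: score toxicity before and after an
# LLM-guided rewrite, then report the average relative reduction. All
# components here are toy stand-ins for the paper's models.

# Toy lexicon mapping toxic terms to neutral replacements (hypothetical).
TOXIC_LEXICON = {"stupid": "unclear", "garbage": "low-quality", "idiot": "contributor"}

# Hypothetical prompt an LLM rewriter might receive for each comment.
DETOX_PROMPT = (
    "Rewrite the following code-review comment so it keeps the technical "
    "feedback but removes insulting or aggressive language:\n{comment}"
)

def toxicity_score(text: str) -> float:
    """Fraction of tokens matching the toy toxic lexicon (stand-in scorer)."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t.strip(".,!?") in TOXIC_LEXICON for t in tokens) / len(tokens)

def llm_rewrite(comment: str) -> str:
    """Stand-in for an LLM call: swap lexicon hits for neutral terms."""
    out = []
    for tok in comment.split():
        key = tok.lower().strip(".,!?")
        out.append(TOXIC_LEXICON.get(key, tok))
    return " ".join(out)

def average_reduction(comments) -> float:
    """Mean relative toxicity drop after rewriting, as a percentage."""
    drops = []
    for c in comments:
        before = toxicity_score(c)
        if before == 0:
            continue  # skip comments that are already clean
        after = toxicity_score(llm_rewrite(c))
        drops.append((before - after) / before * 100)
    return sum(drops) / len(drops) if drops else 0.0
```

In the paper's ablation this kind of before/after comparison yields the reported 37.2% average reduction; the toy rewriter here only illustrates how such a figure is computed, not how it was obtained.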

📝 Abstract
Large Language Models (LLMs) have become integral to software engineering (SE), where they are increasingly used in development workflows. However, their widespread use raises concerns about the presence and propagation of toxic language, i.e., harmful or offensive content that can foster exclusionary environments. This paper provides a comprehensive review of recent research on toxicity detection and mitigation, focusing on both SE-specific and general-purpose datasets. We examine annotation and preprocessing techniques, assess detection methodologies, and evaluate mitigation strategies, particularly those leveraging LLMs. Additionally, we conduct an ablation study demonstrating the effectiveness of LLM-based rewriting for reducing toxicity. By synthesizing existing work and identifying open challenges, this review highlights key areas for future research to ensure the responsible deployment of LLMs in SE and beyond.
Problem

Research questions and friction points this paper is trying to address.

Detecting toxic language in software engineering workflows
Mitigating harmful content using LLM-based strategies
Evaluating toxicity reduction through LLM rewriting techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based toxicity detection methodologies
LLM-driven toxic language mitigation
Ablation study on LLM rewriting effectiveness
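The "hybrid detection" contribution (prompt engineering combined with fine-tuning) can be sketched roughly as below. Both components are stubbed and hypothetical: `prompted_llm_judgment` stands in for a zero-shot LLM call built from the prompt template, `finetuned_score` stands in for a fine-tuned classifier's probability, and the threshold is an assumption.

```python
# Hypothetical sketch of hybrid toxicity detection for SE comments:
# an LLM judgment (via prompt engineering) is combined with a
# fine-tuned classifier score. Both models are stubbed with keyword
# heuristics purely for illustration.

# Hypothetical moderation prompt a real system might send to an LLM.
DETECT_PROMPT = (
    "You are moderating software-engineering discussions. "
    "Answer 'toxic' or 'ok' for this comment:\n{comment}"
)

def prompted_llm_judgment(comment: str) -> bool:
    """Stub for a zero-shot LLM call using DETECT_PROMPT."""
    return any(w in comment.lower() for w in ("idiot", "stupid", "trash"))

def finetuned_score(comment: str) -> float:
    """Stub for a fine-tuned classifier's toxicity probability."""
    hits = sum(w in comment.lower() for w in ("idiot", "stupid", "trash", "!"))
    return min(1.0, 0.3 * hits)

def hybrid_detect(comment: str, threshold: float = 0.5) -> bool:
    """Flag a comment if either component is confident it is toxic."""
    return prompted_llm_judgment(comment) or finetuned_score(comment) >= threshold
```

Combining the two signals with a disjunction favors recall, which suits moderation settings where missed toxic content is costlier than an occasional false flag; a real system would tune this trade-off on annotated SE data.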