Characterising Toxicity in Generative Large Language Models

📅 2026-01-10
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the persistent risk of toxic content generation by large language models under specific prompts, a challenge inadequately mitigated by current alignment techniques such as reinforcement learning from human feedback (RLHF). The work systematically evaluates model toxicity across diverse prompt-induced scenarios and, for the first time, links toxic outputs to concrete linguistic features—namely lexical choices and syntactic structures—thereby uncovering the underlying mechanisms through which prompt formulation influences model behavior. By integrating Transformer decoder architectures, prompt engineering, linguistic analysis, and quantitative toxicity metrics, the research identifies key toxicity triggers and high-risk prompt patterns. These findings offer both theoretical insights and practical guidance for developing more robust content safety mechanisms in generative AI systems.
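The evaluation loop the summary describes — generating completions for a set of prompts and scoring them with a quantitative toxicity metric — can be sketched as follows. This is an illustrative toy, not the paper's actual pipeline: the lexicon-based scorer and the `fake_generate` stub are hypothetical stand-ins for a trained toxicity classifier and a real LLM.

```python
# Illustrative sketch of prompt-induced toxicity evaluation.
# Assumptions (not from the paper): a toy toxic-word lexicon stands in
# for a trained toxicity classifier, and fake_generate() stands in for
# a real decoder-only LLM.

TOXIC_LEXICON = {"idiot", "stupid", "hate"}  # toy placeholder list

def toxicity_score(text: str) -> float:
    """Fraction of whitespace tokens found in the toxic lexicon."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t in TOXIC_LEXICON for t in tokens) / len(tokens)

def evaluate_prompts(generate, prompts):
    """Score each prompt's completion; return pairs sorted by toxicity."""
    scored = [(p, toxicity_score(generate(p))) for p in prompts]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Stub generator for demonstration; swap in a real model call.
def fake_generate(prompt: str) -> str:
    return "you are stupid" if "insult" in prompt else "have a nice day"

results = evaluate_prompts(fake_generate, ["please insult me", "say hello"])
```

Ranking prompts by the toxicity of their completions, as above, is one simple way to surface the high-risk prompt patterns the study refers to; the study itself additionally links those patterns to lexical and syntactic features of the prompts.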

📝 Abstract
In recent years, the advent of the attention mechanism has significantly advanced the field of natural language processing (NLP), revolutionizing text processing and text generation. This progress has been driven by transformer-based decoder-only architectures, which have become ubiquitous in NLP due to their impressive text processing and generation capabilities. Despite these breakthroughs, large language models (LLMs) remain susceptible to generating undesired outputs: inappropriate, offensive, or otherwise harmful responses, which we collectively refer to as "toxic" outputs. Although methods like reinforcement learning from human feedback (RLHF) have been developed to align model outputs with human values, these safeguards can often be circumvented through carefully crafted prompts. This paper therefore examines the extent to which LLMs generate toxic content when prompted, as well as the linguistic factors, both lexical and syntactic, that influence the production of such outputs in generative models.
Problem

Research questions and friction points this paper is trying to address.

toxicity
large language models
undesired outputs
prompting
linguistic factors
Innovation

Methods, ideas, or system contributions that make the work stand out.

toxicity
large language models
linguistic factors
prompt-induced generation
syntactic influence