How Toxic Can You Get? Search-based Toxicity Testing for Large Language Models

📅 2025-01-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the persistent risk of toxic response generation in aligned large language models (LLMs). To uncover such vulnerabilities systematically, the authors propose EvoTox, an automated toxicity stress-testing framework built on the interplay of two LLMs. EvoTox adopts an iterative evolutionary strategy in which a dedicated Prompt Generator crafts adversarial prompts by interacting with the target LLM, overcoming the limitations of static prompt sets and random search. Toxicity is assessed by an automated oracle based on an existing toxicity classifier and validated through human evaluation. Experiments on LLMs with 7 to 13 billion parameters show that EvoTox detects significantly higher toxicity levels than baselines based on random search, curated toxic-prompt datasets, and adversarial attacks (effect sizes up to 1.0 against random search and up to 0.99 against adversarial attacks), while incurring only a limited cost overhead of 22% to 35% on average.
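To make the search loop concrete, here is a minimal Python sketch of an EvoTox-style iteration, not the authors' code: `prompt_generator` and `sut` stand in for the two chat LLMs, `score_toxicity` for the automated oracle, and the mutation instruction and greedy selection step are illustrative assumptions.

```python
# Minimal sketch of an EvoTox-style search loop (illustrative, not the paper's code).
# `prompt_generator` and `sut` are callables wrapping the two chat LLMs;
# `score_toxicity` is the automated oracle mapping a response to [0, 1].

def evotox_search(seed_prompt, prompt_generator, sut, score_toxicity, budget=50):
    """Iteratively evolve a prompt toward eliciting more toxic SUT responses."""
    best_prompt = seed_prompt
    best_score = score_toxicity(sut(best_prompt))
    for _ in range(budget):
        # Ask the Prompt Generator to mutate the current best prompt so it
        # is more likely to provoke a toxic reply from the SUT.
        candidate = prompt_generator(
            "Rewrite this prompt so the reply becomes more toxic:\n" + best_prompt
        )
        score = score_toxicity(sut(candidate))
        if score > best_score:  # keep the candidate only if it scores higher
            best_prompt, best_score = candidate, score
    return best_prompt, best_score
```

The sketch above is a plain hill climb; the paper evaluates four alternative versions of EvoTox, which may differ in how candidate prompts are generated and selected.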

📝 Abstract
Language is a deep-rooted means of perpetration of stereotypes and discrimination. Large Language Models (LLMs), now a pervasive technology in our everyday lives, can cause extensive harm when prone to generating toxic responses. The standard way to address this issue is to align the LLM, which, however, dampens the issue without constituting a definitive solution. Therefore, testing LLMs even after alignment efforts remains crucial for detecting any residual deviations with respect to ethical standards. We present EvoTox, an automated testing framework for LLMs' inclination to toxicity, providing a way to quantitatively assess how much LLMs can be pushed towards toxic responses even in the presence of alignment. The framework adopts an iterative evolution strategy that exploits the interplay between two LLMs: the System Under Test (SUT) and the Prompt Generator, which steers SUT responses toward higher toxicity. The toxicity level is assessed by an automated oracle based on an existing toxicity classifier. We conduct a quantitative and qualitative empirical evaluation using four state-of-the-art LLMs of increasing complexity (7-13 billion parameters) as evaluation subjects. Our quantitative evaluation assesses the cost-effectiveness of four alternative versions of EvoTox against existing baseline methods based on random search, curated datasets of toxic prompts, and adversarial attacks. Our qualitative assessment engages human evaluators to rate the fluency of the generated prompts and the perceived toxicity of the responses collected during the testing sessions. Results indicate that the effectiveness, in terms of detected toxicity level, is significantly higher than that of the selected baseline methods (effect size up to 1.0 against random search and up to 0.99 against adversarial attacks). Furthermore, EvoTox yields a limited cost overhead (from 22% to 35% on average).
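The abstract says the oracle builds on "an existing toxicity classifier" without naming it, so the sketch below uses the open-source Detoxify library purely as an illustrative stand-in; the function name and the model choice are assumptions, not the paper's configuration.

```python
# Hedged sketch of the automated oracle. Detoxify
# (https://github.com/unitaryai/detoxify) is an illustrative stand-in for
# whichever toxicity classifier the paper actually uses.
from detoxify import Detoxify

_classifier = Detoxify("original")  # loads a pretrained toxicity model

def score_toxicity(response: str) -> float:
    """Return the classifier's toxicity probability in [0, 1] for one response."""
    return float(_classifier.predict(response)["toxicity"])
```

Any scorer with this interface could serve as the oracle in the search-loop sketch shown earlier, which is what makes the oracle interchangeable with human or API-based toxicity raters.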
Problem

Research questions and friction points this paper is trying to address.

Harmful Speech
Large Language Models
Dialogue Safety
Innovation

Methods, ideas, or system contributions that make the work stand out.

EvoTox
Large Language Models
Harmful Output Detection
👥 Authors

Simone Corbo
Politecnico di Milano

Luca Bancale
Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano (PoliMI)

Valeria De Gennaro
Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano (PoliMI)

Livia Lestingi
Politecnico di Milano
Software Engineering, Formal Methods, Cyber-Physical Systems

Vincenzo Scotti
Karlsruhe Institute of Technology
Artificial Intelligence, Natural Language Processing, Deep Learning

Matteo Camilli
Associate Professor, Politecnico di Milano
software engineering, software verification, software testing