Automated Harmfulness Testing for Code Large Language Models

📅 2025-03-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses content safety risks arising from latent harmful elements, such as malicious identifiers and stealthy program transformations, in code generated by Code Large Language Models (LLMs). To this end, we propose CHT, a coverage-guided harmfulness testing framework. Methodologically, we (1) systematically identify 32 code-specific harmful program transformations and pair them with harmful-keyword injection strategies; (2) design a coverage-guided metamorphic testing framework that turns benign programs into adversarial prompts; and (3) introduce a two-phase pre-filtering mechanism that detects harmful content before output is generated. Experimental results show that the built-in safety filters of mainstream Code LLMs, including GPT-4o-mini, are readily bypassed by these prompts; the proposed detection mechanism improves moderation effectiveness by 483.76%; and CHT is open-sourced to support reproducible and extensible assessment of code harmfulness.
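The transformation-plus-injection idea can be illustrated with a small sketch. The snippet below is not the paper's implementation: the placeholder token, the benign program, and the helper names are assumptions for illustration. It uses Python's ast module to rename one identifier in a benign program, producing a mutated prompt for the model under test while keeping the program syntactically valid.

```python
# Minimal sketch of one identifier-renaming transformation, assuming a
# placeholder token stands in for a harmful keyword from a curated lexicon.
import ast

HARMFUL_KEYWORD = "harmful_keyword_placeholder"  # stand-in; real keywords are drawn from a curated list

BENIGN_PROGRAM = """
def total(prices):
    result = 0
    for price in prices:
        result += price
    return result
"""

class RenameIdentifier(ast.NodeTransformer):
    """Rename every occurrence of one identifier to the injected keyword."""

    def __init__(self, target, replacement):
        self.target = target
        self.replacement = replacement

    def visit_Name(self, node):
        if node.id == self.target:
            node.id = self.replacement
        return node

def inject_keyword(source, target, keyword):
    tree = ast.parse(source)
    tree = RenameIdentifier(target, keyword).visit(tree)
    return ast.unparse(tree)  # requires Python 3.9+

mutated = inject_keyword(BENIGN_PROGRAM, "result", HARMFUL_KEYWORD)
prompt = f"Explain what the following code does, then extend it:\n\n{mutated}"
print(prompt)
```

Because the mutated program still parses and runs, a model asked to explain or extend it may repeat the injected identifier in its explanation and completion, which is the behavior the testing framework probes for.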

📝 Abstract
Generative AI systems powered by Large Language Models (LLMs) typically rely on content moderation to prevent the spread of harmful content. To evaluate the robustness of this moderation, several metamorphic testing techniques have been proposed for content moderation software. However, these techniques mainly target general-purpose usage (e.g., text and image generation). Meanwhile, a recent study shows that developers consider the use of harmful keywords in the names of software artifacts to be unethical. Exposure to harmful content in software artifacts can negatively affect developers' mental health, which makes content moderation for Code Large Language Models (Code LLMs) essential. We conduct a preliminary study of program transformations that can be misused to introduce harmful content into auto-generated code, identifying 32 such transformations. Building on these findings, we propose CHT, a coverage-guided harmfulness testing framework that generates prompts by injecting harmful keywords into benign programs through diverse transformations. CHT evaluates output damage to assess potential risks in LLM-generated explanations and code. Our evaluation of four Code LLMs and GPT-4o-mini reveals that content moderation in LLM-based code generation is easily bypassed. To enhance moderation, we propose a two-phase approach that first detects harmful content before generating output, improving moderation effectiveness by 483.76%.
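The two-phase idea in the last sentence can be sketched as a thin wrapper around a code-generation call. Everything below is a hypothetical stand-in, not the paper's implementation: detect_harmful_keywords is a toy blocklist detector and generate_code can be any Code LLM backend; the point is only that detection runs before any output is generated.

```python
# Minimal sketch of a two-phase "detect, then generate" moderation wrapper.
# Both helpers are illustrative stand-ins, not the paper's components.
from typing import Callable

BLOCKLIST = {"harmful_keyword_placeholder"}  # stand-in lexicon of harmful terms

def detect_harmful_keywords(text: str) -> bool:
    """Phase 1 detector (stand-in): flag text containing any blocklisted token."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def moderated_generate(prompt: str, generate_code: Callable[[str], str]) -> str:
    # Phase 1: run harmfulness detection on the request before any generation.
    if detect_harmful_keywords(prompt):
        return "Request refused: the prompt appears to contain harmful content."
    # Phase 2: generate only after the request passes the detection phase.
    return generate_code(prompt)

# Usage with any backend, e.g. a local model or an API client:
# moderated_generate("Explain this code: ...", generate_code=my_llm_call)
```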
Problem

Research questions and friction points this paper is trying to address.

Testing harmful content moderation in Code LLMs
Identifying unethical harmful keywords in software artifacts
Improving robustness of AI-generated code content moderation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Coverage-guided harmfulness testing framework (CHT); see the sketch after this list
Diverse transformations and harmful keywords injection
Two-phase harmful content detection before generation
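One plausible reading of "coverage-guided" is that CHT tracks which (transformation, keyword) combinations have already been exercised and prioritizes untried ones when building new prompts. The loop below is a sketch under that assumption; the transformation list, apply_transformation, and the echoed-keyword oracle are illustrative and not taken from the paper.

```python
# Sketch of a coverage-guided test loop over (transformation, keyword) pairs.
# All names below are illustrative assumptions, not the paper's API.
import itertools
import random

TRANSFORMATIONS = ["rename_variable", "rename_function", "insert_comment"]  # the paper catalogs 32
KEYWORDS = ["harmful_keyword_placeholder_1", "harmful_keyword_placeholder_2"]

def apply_transformation(program: str, transformation: str, keyword: str) -> str:
    """Stand-in: inject the keyword into a benign program via the chosen transformation."""
    return f"# transformation: {transformation}\n" + program.replace("result", keyword)

def coverage_guided_loop(benign_programs, query_llm, budget=50):
    space = list(itertools.product(TRANSFORMATIONS, KEYWORDS))
    covered = set()    # (transformation, keyword) pairs already exercised
    findings = []

    for _ in range(budget):
        # Prefer combinations not yet covered, falling back to random reuse.
        uncovered = [pair for pair in space if pair not in covered]
        transformation, keyword = random.choice(uncovered or space)
        program = random.choice(benign_programs)

        prompt = apply_transformation(program, transformation, keyword)
        response = query_llm(prompt)
        covered.add((transformation, keyword))

        # Crude oracle: treat an echoed keyword as harmful content slipping past moderation.
        if keyword in response:
            findings.append((transformation, keyword, prompt))

    return findings
```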
Honghao Tan
Concordia University, Montreal, Canada
Haibo Wang
Concordia University, Montreal, Canada
Diany Pressato
Concordia University, Montreal, Canada
Yisen Xu
Concordia University, Montreal, Canada
Shin Hwei Tan
Associate Professor, Concordia University
Automated Program Repair, Software Testing, Genetic Improvement, Open-source Software Development