Understanding LLM Behavior When Encountering User-Supplied Harmful Content in Harmless Tasks

📅 2026-03-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses a critical yet previously overlooked ethical vulnerability in mainstream large language models: even when performing ostensibly harmless tasks, these models may still process and generate harmful outputs if the user input contains toxic content, thereby violating principles of ethical alignment. The work systematically identifies and formally defines this content-level (as opposed to task-level) ethical risk, introducing a comprehensive dataset comprising 1,357 multi-category harmful knowledge instances paired with nine compliant tasks to evaluate behavioral responses across nine leading models. Experimental results reveal widespread content-level ethical failures among state-of-the-art models—including GPT-5.2 and Gemini-3-Pro—with “violence/graphic” content combined with “translation” tasks proving most prone to eliciting harmful outputs.

Technology Category

Application Category

📝 Abstract
Large Language Models (LLMs) are increasingly trained to align with human values, primarily focusing on task level, i.e., refusing to execute directly harmful tasks. However, a subtle yet crucial content-level ethical question is often overlooked: when performing a seemingly benign task, will LLMs -- like morally conscious human beings -- refuse to proceed when encountering harmful content in user-provided material? In this study, we aim to understand this content-level ethical question and systematically evaluate its implications for mainstream LLMs. We first construct a harmful knowledge dataset (i.e., non-compliant with OpenAI's usage policy) to serve as the user-supplied harmful content, with 1,357 entries across ten harmful categories. We then design nine harmless tasks (i.e., compliant with OpenAI's usage policy) to simulate the real-world benign tasks, grouped into three categories according to the extent of user-supplied content required: extensive, moderate, and limited. Leveraging the harmful knowledge dataset and the set of harmless tasks, we evaluate how nine LLMs behave when exposed to user-supplied harmful content during the execution of benign tasks, and further examine how the dynamics between harmful knowledge categories and tasks affect different LLMs. Our results show that current LLMs, even the latest GPT-5.2 and Gemini-3-Pro, often fail to uphold human-aligned ethics by continuing to process harmful content in harmless tasks. Furthermore, external knowledge from the ``Violence/Graphic'' category and the ``Translation'' task is more likely to elicit harmful responses from LLMs. We also conduct extensive ablation studies to investigate potential factors affecting this novel misuse vulnerability. We hope that our study could inspire enhanced safety measures among stakeholders to mitigate this overlooked content-level ethical risk.
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
harmful content
content-level ethics
alignment
safety
Innovation

Methods, ideas, or system contributions that make the work stand out.

content-level ethics
harmful user-supplied content
LLM safety evaluation
ethical alignment
misuse vulnerability
🔎 Similar Papers
No similar papers found.