Efficient Detection of Toxic Prompts in Large Language Models

📅 2024-08-21
🏛️ International Conference on Automated Software Engineering
📈 Citations: 10
Influential: 1
📄 PDF
🤖 AI Summary
To address the security risk of jailbreaking attacks eliciting harmful responses from large language models (LLMs), this paper proposes ToxicDetector, a lightweight gray-box detection method. It introduces a novel three-stage paradigm: (1) LLM-generated toxic concept prompts, (2) embedding-based semantic feature modeling, and (3) lightweight MLP classification—balancing interpretability and efficiency. Evaluated across multiple LLaMA variants, Gemma-2, and diverse benchmark datasets, ToxicDetector achieves 96.39% detection accuracy and only a 2.00% false positive rate, with per-prompt latency as low as 78 ms. Its core contributions are threefold: (i) the first integration of LLM-driven toxic concept prompting into toxicity detection; (ii) construction of highly discriminative semantic representations via prompt-embedding alignment; and (iii) a gray-box architecture that jointly optimizes detection accuracy, robustness against adversarial prompts, and real-time inference—establishing a new paradigm for deployment-ready content safety protection.

📝 Abstract
Large language models (LLMs) like ChatGPT and Gemini have significantly advanced natural language processing, enabling various applications such as chatbots and automated content generation. However, these models can be exploited by malicious individuals who craft toxic prompts to elicit harmful or unethical responses. These individuals often employ jailbreaking techniques to bypass safety mechanisms, highlighting the need for robust toxic prompt detection methods. Existing detection techniques, both blackbox and whitebox, face challenges related to the diversity of toxic prompts, scalability, and computational efficiency. In response, we propose ToxicDetector, a lightweight greybox method designed to efficiently detect toxic prompts in LLMs. ToxicDetector leverages LLMs to create toxic concept prompts, uses embedding vectors to form feature vectors, and employs a Multi-Layer Perceptron (MLP) classifier for prompt classification. Our evaluation on various versions of the Llama models, Gemma-2, and multiple datasets demonstrates that ToxicDetector achieves a high accuracy of 96.39% and a low false positive rate of 2.00%, outperforming state-of-the-art methods. Additionally, ToxicDetector's processing time of 0.0780 seconds per prompt makes it highly suitable for real-time applications. ToxicDetector achieves high accuracy, efficiency, and scalability, making it a practical method for toxic prompt detection in LLMs.
Problem

Research questions and friction points this paper is trying to address.

Detecting toxic prompts that bypass LLM safety mechanisms
Addressing scalability and efficiency in toxic prompt detection
Improving real-time detection accuracy for harmful user inputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Greybox method using toxic concept prompts
Embedding vectors for feature extraction
MLP classifier for efficient detection
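The pipeline above can be sketched in a few lines: score an incoming prompt's embedding against the embeddings of LLM-generated toxic-concept prompts to form a feature vector, then pass that vector through a small MLP. This is a minimal illustrative sketch, not the authors' implementation; the cosine-similarity features, the toy 3-dimensional "embeddings", and the hand-set MLP weights are all assumptions for demonstration (the paper uses real LLM embeddings and a trained classifier).

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def feature_vector(prompt_emb, concept_embs):
    """One similarity score per toxic-concept prompt embedding."""
    return [cosine(prompt_emb, c) for c in concept_embs]

def mlp_score(features, w1, b1, w2, b2):
    """One hidden ReLU layer, sigmoid output -> toxicity score in (0, 1)."""
    hidden = [max(0.0, sum(w * f for w, f in zip(row, features)) + b)
              for row, b in zip(w1, b1)]
    z = sum(w * h for w, h in zip(w2, hidden)) + b2
    return 1.0 / (1.0 + math.exp(-z))

# Toy 3-dim "embeddings": two toxic-concept prompts and one incoming prompt.
concepts = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
prompt = [0.9, 0.1, 0.0]
feats = feature_vector(prompt, concepts)

# Illustrative hand-set weights; a real deployment would train these.
w1 = [[1.0, 1.0], [-1.0, 1.0]]
b1 = [0.0, 0.0]
w2 = [1.0, -1.0]
b2 = 0.0
score = mlp_score(feats, w1, b1, w2, b2)
```

Because the feature vector is just a handful of similarities and the MLP is tiny, classification cost is dominated by the single embedding lookup, which is consistent with the sub-0.1 s per-prompt latency reported above.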
Yi Liu
Nanyang Technological University, Singapore
Junzhe Yu
ShanghaiTech University, Shanghai, China
Huijia Sun
ShanghaiTech University
Ling Shi
Nanyang Technological University, Singapore
Gelei Deng
Nanyang Technological University
Yuqi Chen
ShanghaiTech University, Shanghai, China
Yang Liu
Nanyang Technological University, Singapore