StealthGraph: Exposing Domain-Specific Risks in LLMs through Knowledge-Graph-Guided Harmful Prompt Generation

πŸ“… 2026-01-08
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the scarcity of implicitly harmful prompts in domain-specific safety evaluations of large language models, a limitation compounded by the fact that manually crafted prompts rarely reflect real-world threats. To overcome this, the authors propose an approach that integrates knowledge-graph-guided generation with dual-path obfuscation rewriting. Domain-relevant prompts are first generated from a knowledge graph, then transformed into highly implicit harmful forms through two complementary strategies: direct rewriting and context-aware enhancement. This methodology enables the first systematic construction of a red-teaming dataset that combines strong domain relevance with a high degree of implicitness. Empirical results demonstrate that the resulting dataset significantly improves the effectiveness of safety evaluations for domain-specific models. The authors publicly release both the code and the dataset to support future research in this area.

πŸ“ Abstract
Large language models (LLMs) are increasingly applied in specialized domains such as finance and healthcare, where they introduce unique safety risks. Domain-specific datasets of harmful prompts remain scarce and still largely rely on manual construction; public datasets mainly focus on explicit harmful prompts, which modern LLM defenses can often detect and refuse. In contrast, implicit harmful prompts, expressed through indirect domain knowledge, are harder to detect and better reflect real-world threats. We identify two challenges: transforming domain knowledge into actionable constraints and increasing the implicitness of generated harmful prompts. To address them, we propose an end-to-end framework that first performs knowledge-graph-guided harmful prompt generation to systematically produce domain-relevant prompts, and then applies dual-path obfuscation rewriting to convert explicit harmful prompts into implicit variants via direct and context-enhanced rewriting. This framework yields high-quality datasets combining strong domain relevance with implicitness, enabling more realistic red-teaming and advancing LLM safety research. We release our code and datasets on GitHub.
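The two-stage structure described in the abstract can be illustrated with a minimal pipeline skeleton. This is purely an illustrative sketch, not the authors' implementation: all names (`Triple`, `kg_guided_generate`, `dual_path_obfuscation`) and the placeholder rewriting functions are hypothetical, and real stages would involve LLM-based generation and rewriting rather than string templates.

```python
from dataclasses import dataclass

# Hypothetical subject-relation-object triple from a domain knowledge graph
# (finance example; the paper's actual KG schema is not specified here).
@dataclass(frozen=True)
class Triple:
    subject: str
    relation: str
    obj: str

def kg_guided_generate(triples):
    """Stage 1 (sketch): turn each KG triple into a domain-grounded seed prompt."""
    return [f"Explain how {t.subject} {t.relation} {t.obj}." for t in triples]

def direct_rewrite(prompt):
    """Path A (placeholder): rephrase the seed prompt in place."""
    return f"[direct] {prompt}"

def context_enhanced_rewrite(prompt):
    """Path B (placeholder): embed the seed prompt in a plausible domain scenario."""
    return f"[context] As a compliance analyst, consider: {prompt}"

def dual_path_obfuscation(prompts):
    """Stage 2 (sketch): apply both rewriting paths to every seed prompt."""
    return [rw(p) for p in prompts for rw in (direct_rewrite, context_enhanced_rewrite)]

triples = [Triple("margin lending", "amplifies", "liquidation risk")]
dataset = dual_path_obfuscation(kg_guided_generate(triples))
```

Each seed prompt yields two dataset entries, one per rewriting path, which mirrors the "dual-path" design: the two variants can later be filtered or scored for implicitness.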
Problem

Research questions and friction points this paper is trying to address.

domain-specific risks
implicit harmful prompts
large language models
knowledge graph
LLM safety
Innovation

Methods, ideas, or system contributions that make the work stand out.

knowledge-graph-guided generation
implicit harmful prompts
dual-path obfuscation
domain-specific red-teaming
LLM safety