Harnessing Hyperbolic Geometry for Harmful Prompt Detection and Sanitization

πŸ“… 2026-04-07
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the vulnerability of vision-language models to adversarial prompt attacks, noting that existing defenses are either easily circumvented or computationally expensive and susceptible to embedding-layer attacks. To overcome these limitations, the study introduces hyperbolic geometry into prompt security for the first time, proposing two lightweight components: HyPE and HyPS. HyPE leverages the geometric properties of hyperbolic space to model benign prompts and detect anomalies, while HyPS employs interpretable attribution to identify harmful tokens and perform semantics-preserving text sanitization. The resulting framework achieves strong robustness and interpretability, significantly outperforming current methods in detection accuracy and resilience against attacks across multiple datasets and adversarial settings.
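The paper releases no code, but the core idea of HyPE — embed benign prompts in hyperbolic space and flag harmful prompts as geometric outliers — can be sketched. The sketch below is illustrative, not the authors' implementation: the names (`exp_map0`, `fit_benign_center`, `is_outlier`), the crude averaged prototype, and the fixed threshold are all assumptions; a real system would use learned hyperbolic embeddings and a properly fitted Fréchet mean.

```python
import numpy as np

def exp_map0(v, eps=1e-9):
    # Map a Euclidean embedding onto the Poincare ball via the
    # exponential map at the origin (curvature c = 1).
    norm = np.linalg.norm(v) + eps
    return np.tanh(norm) * v / norm

def poincare_dist(u, v, eps=1e-9):
    # Geodesic distance between two points inside the Poincare ball.
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2)) + eps
    return np.arccosh(1 + 2 * sq / denom)

def fit_benign_center(benign_embs):
    # Crude benign prototype: map each embedding into the ball and
    # average (a stand-in for a true Frechet mean), re-projecting
    # if the average drifts outside the unit ball.
    pts = np.array([exp_map0(e) for e in benign_embs])
    mean = pts.mean(axis=0)
    n = np.linalg.norm(mean)
    return mean if n < 1 else mean / (n + 1e-3)

def is_outlier(emb, center, threshold):
    # HyPE-style decision: a prompt embedding far (in hyperbolic
    # distance) from the benign prototype is flagged as harmful.
    return poincare_dist(exp_map0(emb), center) > threshold
```

Hyperbolic distance grows rapidly toward the ball's boundary, which is why anomalies that would look marginal under Euclidean distance become well-separated outliers here.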
πŸ“ Abstract
Vision-Language Models (VLMs) have become essential for tasks such as image synthesis, captioning, and retrieval by aligning textual and visual information in a shared embedding space. Yet this flexibility also makes them vulnerable to malicious prompts designed to produce unsafe content, raising critical safety concerns. Existing defenses either rely on blacklist filters, which are easily circumvented, or on heavy classifier-based systems; both are costly and fragile under embedding-level attacks. We address these challenges with two complementary components: Hyperbolic Prompt Espial (HyPE) and Hyperbolic Prompt Sanitization (HyPS). HyPE is a lightweight anomaly detector that leverages the structured geometry of hyperbolic space to model benign prompts and detect harmful ones as outliers. HyPS builds on this detection by applying explainable attribution methods to identify and selectively modify harmful words, neutralizing unsafe intent while preserving the original semantics of user prompts. Through extensive experiments across multiple datasets and adversarial scenarios, we show that our framework consistently outperforms prior defenses in both detection accuracy and robustness. Together, HyPE and HyPS offer an efficient, interpretable, and resilient approach to safeguarding VLMs against malicious prompt misuse.
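The abstract describes HyPS as attribution-driven: score each token's contribution to the harmfulness signal, then rewrite only the top contributors so the rest of the prompt keeps its meaning. A minimal sketch of that idea, assuming a leave-one-out attribution (the paper's actual attribution method may differ) and a hypothetical `score_fn` that returns a harmfulness score for a token list:

```python
def sanitize(tokens, score_fn, budget=3, mask="[SAFE]"):
    """Leave-one-out attribution: a token's harm contribution is the
    drop in the harmfulness score when that token is removed. Replace
    at most `budget` positive contributors with a neutral mask token,
    leaving the rest of the prompt untouched."""
    base = score_fn(tokens)
    contrib = []
    for i in range(len(tokens)):
        reduced = tokens[:i] + tokens[i + 1:]
        contrib.append((base - score_fn(reduced), i))
    contrib.sort(reverse=True)  # highest contribution first
    out = list(tokens)
    for delta, i in contrib[:budget]:
        if delta > 0:  # only rewrite tokens that actually add harm
            out[i] = mask
    return out
```

In the full framework, `score_fn` would be the hyperbolic outlier score from HyPE, so detection and sanitization share one geometric signal; here any callable that maps a token list to a float works.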
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Models
harmful prompt detection
prompt sanitization
embedding-level attacks
model safety
Innovation

Methods, ideas, or system contributions that make the work stand out.

hyperbolic geometry
harmful prompt detection
prompt sanitization
vision-language models
anomaly detection
πŸ”Ž Similar Papers
2024-08-21International Conference on Automated Software EngineeringCitations: 10