🤖 AI Summary
Multimodal large language models (MLLMs) are vulnerable to adversarial prompts and jailbreaking attacks, posing significant safety risks. Method: This paper proposes QGuard—a zero-shot, query-driven, dynamic safety guard that requires no model fine-tuning. Leveraging white-box analysis of inputs, QGuard dynamically generates diverse, interpretable guard questions in real time to perform immediate risk detection on both textual and multimodal inputs. Contribution/Results: QGuard introduces the first “query-based zero-shot multimodal safety guarding” paradigm, integrating dynamic guard-question generation, diversity enhancement, and interpretability-aware analysis. Extensive experiments demonstrate that QGuard achieves state-of-the-art defense performance across mainstream harmful-content and jailbreaking benchmarks, with low false-positive rates, minimal computational overhead, strong generalization across unseen attack types, and seamless integration into production LLM serving systems.
📝 Abstract
The recent advancements in Large Language Models(LLMs) have had a significant impact on a wide range of fields, from general domains to specialized areas. However, these advancements have also significantly increased the potential for malicious users to exploit harmful and jailbreak prompts for malicious attacks. Although there have been many efforts to prevent harmful prompts and jailbreak prompts, protecting LLMs from such malicious attacks remains an important and challenging task. In this paper, we propose QGuard, a simple yet effective safety guard method, that utilizes question prompting to block harmful prompts in a zero-shot manner. Our method can defend LLMs not only from text-based harmful prompts but also from multi-modal harmful prompt attacks. Moreover, by diversifying and modifying guard questions, our approach remains robust against the latest harmful prompts without fine-tuning. Experimental results show that our model performs competitively on both text-only and multi-modal harmful datasets. Additionally, by providing an analysis of question prompting, we enable a white-box analysis of user inputs. We believe our method provides valuable insights for real-world LLM services in mitigating security risks associated with harmful prompts.