Evolving Contextual Safety in Multi-Modal Large Language Models via Inference-Time Self-Reflective Memory

📅 2026-03-16
📝 Abstract
Multi-modal Large Language Models (MLLMs) have achieved remarkable performance across a wide range of visual reasoning tasks, yet their vulnerability to safety risks remains a pressing concern. While prior research primarily focuses on jailbreak defenses that detect and refuse explicitly unsafe inputs, such approaches often overlook contextual safety, which requires models to distinguish subtle contextual differences between scenarios that may appear similar but diverge significantly in safety intent. In this work, we present MM-SafetyBench++, a carefully curated benchmark designed for contextual safety evaluation. Specifically, for each unsafe image-text pair, we construct a corresponding safe counterpart through minimal modifications that flip the user intent while preserving the underlying contextual meaning, enabling controlled evaluation of whether models can adapt their safety behaviors based on contextual understanding. Further, we introduce EchoSafe, a training-free framework that maintains a self-reflective memory bank to accumulate and retrieve safety insights from prior interactions. By integrating relevant past experiences into current prompts, EchoSafe enables context-aware reasoning and continual evolution of safety behavior during inference. Extensive experiments on various multi-modal safety benchmarks demonstrate that EchoSafe consistently achieves superior performance, establishing a strong baseline for advancing contextual safety in MLLMs. All benchmark data and code are available at https://echosafe-mllm.github.io.
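The abstract describes EchoSafe's core loop: accumulate safety insights from prior interactions in a memory bank, retrieve the most relevant ones for a new query, and prepend them to the prompt. A minimal sketch of that retrieve-and-prepend pattern is below; the toy bag-of-words embedding stands in for a real multi-modal encoder, and all class and function names (`MemoryBank`, `build_prompt`, etc.) are illustrative assumptions, not the actual EchoSafe implementation.

```python
import math
from dataclasses import dataclass, field

def embed(text: str) -> dict:
    # Toy bag-of-words embedding; a real system would use a learned
    # (multi-modal) encoder over the image-text pair.
    vec: dict = {}
    for tok in text.lower().split():
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(v * b.get(t, 0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

@dataclass
class MemoryBank:
    # Each entry pairs a context embedding with a stored safety insight.
    entries: list = field(default_factory=list)

    def add(self, context: str, insight: str) -> None:
        self.entries.append((embed(context), insight))

    def retrieve(self, query: str, k: int = 2) -> list:
        # Return the k insights whose contexts best match the query.
        qv = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(qv, e[0]), reverse=True)
        return [insight for _, insight in ranked[:k]]

def build_prompt(query: str, bank: MemoryBank) -> str:
    # Prepend retrieved insights so the model can reason with past experience.
    insights = bank.retrieve(query)
    preamble = "\n".join(f"[Past insight] {i}" for i in insights)
    body = f"[Current query] {query}"
    return f"{preamble}\n{body}" if preamble else body
```

Usage: after `bank.add(...)` calls recording earlier interactions, `build_prompt("how much medication should I give my toddler", bank)` yields a prompt whose retrieved insights bias the model toward context-appropriate caution, with no parameter updates (matching the training-free claim).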
Ce Zhang
PhD Student, Carnegie Mellon University
Machine Learning · Computer Vision
Jinxi He
Robotics Institute, Carnegie Mellon University
Junyi He
Robotics Institute, Carnegie Mellon University
Katia Sycara
Professor, School of Computer Science, Carnegie Mellon University
Artificial Intelligence · Multi-Robot Systems · Human Robot Interaction · Multi-Agent Systems · Semantic Web
Yaqi Xie
Robotics Institute, Carnegie Mellon University