๐ค AI Summary
This work addresses the fragility of current vision-language models (VLMs) in making moral judgments under real-world conditions, where their ethical stances can be easily reversed by irrelevant textual or visual perturbations. Introducing the concept of โmoral robustnessโ for the first time, the study systematically evaluates VLM consistency across multiple moral domains and model scales using a multimodal, model-agnostic perturbation framework. The findings reveal a trade-off between instruction-following capability and moral susceptibility, and propose a lightweight, inference-time intervention strategy that effectively enhances the stability of moral judgments. Results demonstrate that even simple perturbations can significantly undermine the ethical consistency of VLMs, while the proposed intervention partially restores their moral alignment, underscoring the critical role of robustness in achieving reliable moral alignment.
๐ Abstract
Despite substantial efforts toward improving the moral alignment of Vision-Language Models (VLMs), it remains unclear whether their ethical judgments are stable in realistic settings. This work studies moral robustness in VLMs, defined as the ability to preserve moral judgments under textual and visual perturbations that do not alter the underlying moral context. We systematically probe VLMs with a diverse set of model-agnostic multimodal perturbations and find that their moral stances are highly fragile, frequently flipping under simple manipulations. Our analysis reveals systematic vulnerabilities across perturbation types, moral domains, and model scales, including a sycophancy trade-off where stronger instruction-following models are more susceptible to persuasion. We further show that lightweight inference-time interventions can partially restore moral stability. These results demonstrate that moral alignment alone is insufficient and that moral robustness is a necessary criterion for the responsible deployment of VLMs.