🤖 AI Summary
Large vision-language models (LVLMs) suffer from semantic-level hallucinations triggered by specific visual concepts (e.g., “slippery”, “foggy”), exposing a critical robustness bottleneck.
Method: We propose the first semantic-level adversarial exploration paradigm to systematically discover such sensitive semantic concepts in images. Our approach introduces a collaborative evolutionary framework that integrates large language models (LLMs) and text-to-image (T2I) diffusion models: LLMs drive semantic crossover and mutation, while T2I models close the loop by rendering candidate descriptions into images, jointly optimizing for concept sensitivity rather than conventional pixel-level perturbations.
Contribution/Results: Evaluated on seven mainstream LVLMs across two multimodal task categories, our method efficiently identifies sensitive semantics and improves the interpretability of model blind spots. It uncovers vulnerabilities shared across models, such as consistent hallucination patterns under weather-related or surface-property concepts, providing actionable insights for targeted robustness training and mitigation strategies.
📝 Abstract
Adversarial attacks aim to generate malicious inputs that mislead deep models, but beyond causing model failure, they provide little interpretable information, such as an answer to "What content in inputs makes models more likely to fail?" Yet this information is crucial for researchers seeking to improve model robustness in a targeted way. Recent research suggests that models may be particularly sensitive to certain semantics in visual inputs (such as "wet" or "foggy"), making them prone to errors. Inspired by this, in this paper we conduct the first such exploration on large vision-language models (LVLMs) and find that LVLMs are indeed susceptible to hallucinations and various errors when facing specific semantic concepts in images. To efficiently search for these sensitive concepts, we integrate large language models (LLMs) and text-to-image (T2I) models into a novel semantic evolution framework. Randomly initialized semantic concepts undergo LLM-based crossover and mutation operations to form image descriptions, which are then converted by T2I models into visual inputs for LVLMs. The task-specific performance of LVLMs on each input is quantified as a fitness score for the involved semantics and serves as a reward signal that further guides the LLM toward concepts that induce errors in LVLMs. Extensive experiments on seven mainstream LVLMs and two multimodal tasks demonstrate the effectiveness of our method. Additionally, we report interesting findings about the sensitive semantics of LVLMs, aiming to inspire further in-depth research.
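The evolutionary loop described above (LLM-driven crossover and mutation, T2I rendering, LVLM performance as fitness) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: `llm_crossover`, `llm_mutate`, `t2i_generate`, and `lvlm_fitness` are hypothetical stand-ins for calls to the actual LLM, diffusion model, and task-specific LVLM evaluation.

```python
import random

def llm_crossover(a, b, rng):
    # Stand-in: an LLM would blend two semantic concepts into a new one.
    return f"{a.split()[0]} {b.split()[-1]}"

def llm_mutate(concept, rng):
    # Stand-in: an LLM would rewrite the concept guided by reward signals;
    # here we just prepend a surface/weather modifier from the paper's examples.
    modifiers = ["wet", "foggy", "slippery", "cracked"]
    return f"{rng.choice(modifiers)} {concept}"

def t2i_generate(description):
    # Stand-in: a T2I diffusion model would render the description to an image.
    return {"prompt": description}

def lvlm_fitness(image):
    # Stand-in: in the real framework, lower task performance on this input
    # means higher fitness (the concept is more "sensitive"). Deterministic
    # dummy score so the sketch runs end to end.
    return (sum(ord(c) for c in image["prompt"]) % 100) / 100.0

def evolve(seed_concepts, generations=3, pop_size=4, seed=0):
    """Search for concepts that most degrade LVLM task performance."""
    rng = random.Random(seed)
    population = list(seed_concepts)
    for _ in range(generations):
        # Score each concept by the fitness of its rendered image.
        scored = sorted(population,
                        key=lambda c: lvlm_fitness(t2i_generate(c)),
                        reverse=True)
        # Keep the most sensitivity-inducing concepts as parents.
        parents = scored[: max(2, pop_size // 2)]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            children.append(llm_mutate(llm_crossover(a, b, rng), rng))
        population = parents + children
    return sorted(population,
                  key=lambda c: lvlm_fitness(t2i_generate(c)),
                  reverse=True)

top = evolve(["road scene", "kitchen table", "city street", "forest path"])
print(top[0])  # most sensitivity-inducing concept found by this toy search
```

The key design point carried over from the paper is the closed loop: fitness computed from LVLM behavior on generated images feeds back into which concepts survive and how they are recombined, rather than perturbing pixels of a fixed image.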