DeepSeek on a Trip: Inducing Targeted Visual Hallucinations via Representation Vulnerabilities

📅 2025-02-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work exposes representation fragility in the image embedding layer of DeepSeek Janus, a multimodal large language model (MLLM), and proposes an embedding-space adversarial optimization method to induce targeted visual hallucinations via controllable image representations—thereby compromising vision-language alignment reliability. We establish the first embedding-level security evaluation paradigm for open-source MLLMs, designing a multi-prompt hallucination detection framework based on LLaMA-3.1 8B Instruct, and validating cross-dataset transferability of attacks on COCO, DALL-E 3, and SVIT. Structural similarity (SSIM) constraints ensure perturbation imperceptibility. Experiments show hallucination rates up to 98.0% under open-ended QA (SSIM > 0.88), with significant vulnerability observed across both 1B and 7B model variants. This work provides a novel perspective on visual-modality security in MLLMs and introduces a reproducible benchmark for future research.

Technology Category

Application Category

📝 Abstract
Multimodal Large Language Models (MLLMs) represent the cutting edge of AI technology, with DeepSeek models emerging as a leading open-source alternative offering competitive performance to closed-source systems. While these models demonstrate remarkable capabilities, their vision-language integration mechanisms introduce specific vulnerabilities. We implement an adapted embedding manipulation attack on DeepSeek Janus that induces targeted visual hallucinations through systematic optimization of image embeddings. Through extensive experimentation across COCO, DALL-E 3, and SVIT datasets, we achieve hallucination rates of up to 98.0% while maintaining high visual fidelity (SSIM>0.88) of the manipulated images on open-ended questions. Our analysis demonstrates that both 1B and 7B variants of DeepSeek Janus are susceptible to these attacks, with closed-form evaluation showing consistently higher hallucination rates compared to open-ended questioning. We introduce a novel multi-prompt hallucination detection framework using LLaMA-3.1 8B Instruct for robust evaluation. The implications of these findings are particularly concerning given DeepSeek's open-source nature and widespread deployment potential. This research emphasizes the critical need for embedding-level security measures in MLLM deployment pipelines and contributes to the broader discussion of responsible AI implementation.
Problem

Research questions and friction points this paper is trying to address.

Targeted visual hallucinations in MLLMs
Embedding manipulation attack on DeepSeek
Vulnerabilities in vision-language integration mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapted embedding manipulation attack
Systematic optimization of image embeddings
Novel multi-prompt hallucination detection framework
🔎 Similar Papers
No similar papers found.