🤖 AI Summary
This work presents the first systematic evaluation of the security disparities between multimodal large language models (MLLMs) and diffusion models, covering both unsafe content generation and deceptive image synthesis. Using multiple benchmark datasets and adversarial experiments with both existing and retrained state-of-the-art detectors, the study demonstrates that MLLMs, owing to their stronger semantic comprehension of complex prompts, are more prone to producing harmful outputs. Moreover, MLLM-generated images largely evade both current and retrained detection mechanisms. These findings reveal an intrinsic link between the advanced semantic reasoning of MLLMs and heightened security risks, underscoring their potential for real-world misuse through both explicit harmful generation and stealthy image forgery.
📝 Abstract
Recently, multimodal large language models (MLLMs) have emerged as a unified paradigm for language and image generation. Compared with diffusion models, MLLMs possess a much stronger capability for semantic understanding, enabling them to process more complex textual inputs and comprehend richer contextual meanings. However, this enhanced semantic ability may also introduce new and potentially greater safety risks. Taking diffusion models as a reference point, we systematically analyze and compare the safety risks of emerging MLLMs along two dimensions: unsafe content generation and fake image synthesis. Across multiple unsafe-generation benchmark datasets, we observe that MLLMs tend to generate more unsafe images than diffusion models. This difference arises in part because diffusion models often fail to interpret abstract prompts and produce corrupted outputs, whereas MLLMs comprehend these prompts and generate unsafe content. MLLM-generated images are also notably harder for current advanced fake image detectors to identify. Even when detectors are retrained with MLLM-specific data, they can still be bypassed by simply providing the MLLMs with longer, more descriptive inputs. Our measurements indicate that the emerging safety risks of MLLMs, the cutting-edge generative paradigm, have not been sufficiently recognized, posing new challenges to real-world safety.
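To make the detector-evasion measurement concrete, the sketch below compares detection rates for images produced from short versus longer, more descriptive prompts. This is a minimal outline under stated assumptions, not the paper's actual pipeline: `generate_image` and `detector_score` are hypothetical stand-ins for an MLLM image generator and a (re)trained fake-image detector.

```python
# Minimal sketch of the prompt-length evasion measurement.
# Hypothetical callables (not from the paper's code):
#   generate_image(prompt) -> image        (MLLM image generator)
#   detector_score(image)  -> float        (detector's P(fake) estimate)

from statistics import mean

def detection_rate(prompts, generate_image, detector_score, threshold=0.5):
    """Fraction of generated images the detector flags as fake."""
    scores = [detector_score(generate_image(p)) for p in prompts]
    return mean(score >= threshold for score in scores)

def compare_prompt_lengths(short_prompts, long_prompts,
                           generate_image, detector_score):
    """Compare detection rates for short vs. longer, descriptive prompts.

    The abstract's finding corresponds to rate_long < rate_short:
    longer, more descriptive inputs yield MLLM images that even a
    retrained detector flags less often.
    """
    rate_short = detection_rate(short_prompts, generate_image, detector_score)
    rate_long = detection_rate(long_prompts, generate_image, detector_score)
    return rate_short, rate_long
```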