🤖 AI Summary
Current multimodal foundation models lack robustness against typographic attacks, in which misleading text embedded within an image causes misclassification, and the absence of a large-scale, diverse real-world benchmark has hindered systematic study. This paper introduces SCAM, the largest and most diverse dataset of real-world typographic attack images to date, comprising 1,162 images spanning hundreds of object categories and attack words. Extensive benchmarking under a unified evaluation framework shows substantial performance degradation across mainstream VLMs, and a systematic analysis reveals that the vision encoder, not the LLM, is the primary source of vulnerability in LVLMs, while scaling up the LLM backbone mitigates this fragility. The authors further show that synthetic attacks closely resemble real-world handwritten ones, validating their use in research. The dataset and evaluation code are fully open-sourced to advance research on robust multimodal AI.
📝 Abstract
Typographic attacks exploit the interplay between text and visual content in multimodal foundation models, causing misclassifications when misleading text is embedded within images. However, existing datasets are limited in size and diversity, making it difficult to study such vulnerabilities. In this paper, we introduce SCAM, the largest and most diverse dataset of real-world typographic attack images to date, containing 1,162 images across hundreds of object categories and attack words. Through extensive benchmarking of Vision-Language Models (VLMs) on SCAM, we demonstrate that typographic attacks significantly degrade performance, and identify that training data and model architecture influence susceptibility to these attacks. Our findings reveal that typographic attacks persist in state-of-the-art Large Vision-Language Models (LVLMs) due to the choice of their vision encoder, though larger Large Language Model (LLM) backbones help mitigate this vulnerability. Additionally, we demonstrate that synthetic attacks closely resemble real-world (handwritten) attacks, validating their use in research. Our work provides a comprehensive resource and empirical insights to facilitate future research toward robust and trustworthy multimodal AI systems. We publicly release the datasets introduced in this paper at https://huggingface.co/datasets/BLISS-e-V/SCAM, along with the code for evaluations at https://github.com/Bliss-e-V/SCAM.
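To make the notion of a synthetic typographic attack concrete, here is a minimal sketch using Pillow that overlays an attack word onto an image. The attack word, placement, and font choice are illustrative assumptions, not the paper's actual generation pipeline:

```python
from PIL import Image, ImageDraw


def add_typographic_attack(image: Image.Image, attack_word: str) -> Image.Image:
    """Overlay a misleading attack word onto a copy of the image (illustrative sketch)."""
    attacked = image.copy()
    draw = ImageDraw.Draw(attacked)
    w, h = attacked.size
    # Place the word near the bottom-left using Pillow's default font;
    # real pipelines would vary position, font, and size.
    draw.text((int(0.05 * w), int(0.85 * h)), attack_word, fill="black")
    return attacked


# Example: a blank stand-in image is overlaid with the misleading word "banana".
base = Image.new("RGB", (224, 224), color="white")
attacked = add_typographic_attack(base, "banana")
```

A zero-shot classifier such as CLIP can then be evaluated on the original versus attacked image to measure how often the embedded text flips its prediction.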