🤖 AI Summary
This work addresses the vulnerability of multimodal large language models (MLLMs) to hidden-pattern visual illusions: images whose hidden content is readily visible to humans but goes unperceived by models, due to a bias toward high-frequency visual features. The study presents the first systematic analysis of this failure mechanism and introduces the Strategy of Multi-Scale Perception (SMSP), a plug-and-play approach that uses frequency-domain analysis and multi-scale image processing to suppress high-frequency background interference, thereby aligning model inputs more closely with human visual perception. To enable evaluation, the authors construct IlluChar, a large-scale visual illusion benchmark, and demonstrate SMSP's effectiveness across multiple mainstream MLLMs, e.g., boosting the accuracy of Qwen3-VL-8B-Instruct from 13.0% to 84.0%.
📝 Abstract
Recent works have shown that Multimodal Large Language Models (MLLMs) are highly vulnerable to hidden-pattern visual illusions, in which the hidden content is imperceptible to models but obvious to humans. This deficiency highlights a perceptual misalignment between current MLLMs and humans, and also introduces potential safety concerns. To systematically investigate this failure, we introduce IlluChar, a comprehensive and challenging illusion dataset, and uncover a key underlying mechanism of the models' failure: high-frequency attention bias, whereby models are easily distracted by high-frequency background textures in illusion images and consequently overlook the hidden patterns. To address this issue, we propose the Strategy of Multi-Scale Perception (SMSP), a plug-and-play framework that aligns with human visual perceptual strategies. By suppressing distracting high-frequency backgrounds, SMSP generates images closer to what humans perceive. Our experiments demonstrate that SMSP significantly improves the performance of all evaluated MLLMs on illusion images; for instance, it increases the accuracy of Qwen3-VL-8B-Instruct from 13.0% to 84.0%. Our work provides novel insights into MLLMs' visual perception and offers a practical, robust way to enhance it. Our code is publicly available at https://github.com/Tujz2023/SMSP.
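To make the frequency-domain intuition concrete, here is a minimal sketch (not the paper's actual SMSP pipeline; the synthetic image and cutoff parameter are illustrative assumptions) of how suppressing high-frequency content with a low-pass filter can recover a low-frequency "hidden" pattern buried under a fine background texture:

```python
import numpy as np

def lowpass_filter(img: np.ndarray, cutoff_frac: float = 0.1) -> np.ndarray:
    """Suppress high-frequency content with an ideal low-pass filter in the
    2-D Fourier domain. Illustrative only; SMSP's multi-scale processing
    in the paper may differ."""
    f = np.fft.fftshift(np.fft.fft2(img))          # spectrum, DC at center
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    cy, cx = h / 2, w / 2
    radius = cutoff_frac * min(h, w)
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# Synthetic "illusion": a slow horizontal wave (hidden pattern) plus
# pixel-alternating stripes (high-frequency background texture).
h = w = 64
yy, xx = np.mgrid[:h, :w]
hidden = np.sin(2 * np.pi * xx / 64)     # 1 cycle across the image
texture = 0.8 * np.cos(np.pi * xx)       # stripes at the Nyquist frequency
img = hidden + texture

out = lowpass_filter(img, cutoff_frac=0.1)
# After filtering, the texture is gone and the output tracks the hidden wave.
corr = np.corrcoef(out.ravel(), hidden.ravel())[0, 1]
```

Here the stripes live far outside the low-pass cutoff while the hidden wave lies well inside it, so the filtered image correlates almost perfectly with the hidden pattern, mimicking how a blurred or downscaled view makes such illusions obvious to humans.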