🤖 AI Summary
Large multimodal models (LMMs) exhibit systematic perceptual deficiencies in detecting subtle visual differences, posing risks in safety-critical applications.
Method: We introduce "LMM-JND" (Just-Noticeable Difference for LMMs), a novel metric quantifying the minimal distortion an LMM can detect, and propose a standardized determination pipeline aligned with human visual perception. Spanning 12 distortion types, we construct VPA-JND, a large-scale benchmark of 21.5K reference images with over 489K stimuli, and evaluate leading LMMs including GPT-4o and InternVL2.5.
Contribution/Results: Experiments reveal that state-of-the-art LMMs significantly underperform humans on fundamental visual comparison tasks, exposing critical robustness gaps. Crucially, JND performance is strongly influenced by the architectural design of both vision and language backbones. This work establishes the first quantitative characterization of LMMs’ visual acuity limits, providing a reproducible benchmark and a new paradigm for perceptual capability assessment and model optimization.
📝 Abstract
Just noticeable difference (JND), the minimum change that the human visual system (HVS) can perceive, has been studied for decades. Although recent work has extended this line of research to machine vision, few studies have systematically explored its perceptual boundaries across multiple tasks and stimulus types, particularly in the current era of rapidly advancing large multimodal models (LMMs), where characterizing the multifaceted capabilities of models has become a mainstream focus. Moreover, the perceptual defects of LMMs have not been investigated thoroughly, leaving potential security issues and suboptimal response efficiency unaddressed. In this paper, we make an initial attempt and demonstrate that significant visual blind spots exist in current LMMs. To systematically quantify this characteristic, we propose a new concept, LMM-JND, together with its determination pipeline. To uncover behavioral commonalities in HVS-aligned visual perception tasks, we delve into several LMM families and construct a large-scale dataset, named VPA-JND, which contains 21.5k reference images with over 489k stimuli across 12 distortion types, to facilitate LMM-JND studies. VPA-JND exposes areas where state-of-the-art LMMs, including GPT-4o and the InternVL2.5 series, struggle with basic comparison queries and fall significantly short of human-level visual performance. We further explore the effects of vision and language backbones and find a notable correlation between their design philosophies that may guide the future refinement of LMMs' visual acuity. Together, our research underscores the significance of LMM-JND as a unique perspective for studying LMMs, and predictable LMM-JND is crucial for security concerns. This work will be available at https://github.com/zijianchen98/LMM-JND.
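The determination pipeline described above can be sketched as a simple threshold search: distort a reference image at increasing intensity levels, query the LMM with a comparison prompt at each level, and record the smallest level at which the model first reports a visible difference. The sketch below is illustrative only; the function name, the toy detector, and the level grid are assumptions, not the paper's exact protocol.

```python
# Minimal sketch of an LMM-JND search loop (hypothetical names; the
# real pipeline would render a distorted image at each level and ask
# the LMM a comparison query such as "Are these two images identical?").
from typing import Callable, Optional, Sequence


def find_lmm_jnd(
    detects_difference: Callable[[int], bool],
    levels: Sequence[int],
) -> Optional[int]:
    """Return the smallest distortion level the model flags as visibly
    different from the reference, or None if no level is detected."""
    for level in sorted(levels):
        if detects_difference(level):
            return level  # first detected level = LMM-JND
    return None


if __name__ == "__main__":
    # Toy stand-in for an LMM query: "perceives" distortion at level >= 4.
    toy_model = lambda level: level >= 4
    print(find_lmm_jnd(toy_model, range(1, 11)))  # → 4
```

A human-subject JND study follows the same outer loop with an observer in place of the model, which is what makes the LMM and HVS thresholds directly comparable.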