Just Noticeable Difference for Large Multimodal Models

📅 2025-07-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large multimodal models (LMMs) exhibit systematic perceptual deficiencies in detecting subtle visual differences, posing risks in safety-critical applications. Method: We introduce “LMM-JND” (Just-Noticeable Difference for LMMs), a novel metric quantifying the minimal detectable distortion for LMMs, and propose a standardized evaluation protocol aligned with human visual perception. Leveraging 12 distortion types, we construct VPA-JND—a large-scale benchmark of 489K image pairs—and evaluate leading LMMs including GPT-4o and InternVL2.5. Contribution/Results: Experiments reveal that state-of-the-art LMMs significantly underperform humans on fundamental visual comparison tasks, exposing critical robustness gaps. Crucially, JND performance is strongly influenced by the architectural design of both vision and language backbones. This work establishes the first quantitative characterization of LMMs’ visual acuity limits, providing a reproducible benchmark and a new paradigm for perceptual capability assessment and model optimization.

📝 Abstract
Just noticeable difference (JND), the minimum change that the human visual system (HVS) can perceive, has been studied for decades. Although recent work has extended this line of research into machine vision, there has been a scarcity of studies systematically exploring its perceptual boundaries across multiple tasks and stimulus types, particularly in the current era of rapidly advancing large multimodal models (LMMs), where studying the multifaceted capabilities of models has become a mainstream focus. Moreover, the perceptual defects of LMMs have not been investigated thoroughly, resulting in potential security issues and suboptimal response efficiency. In this paper, we make an initial attempt and demonstrate that significant visual blind spots exist in current LMMs. To systematically quantify this characteristic, we propose a new concept, LMM-JND, together with its determination pipeline. To uncover behavior commonalities in HVS-aligned visual perception tasks, we delve into several LMM families and construct a large-scale dataset, named VPA-JND, which contains 21.5k reference images with over 489k stimuli across 12 distortion types, to facilitate LMM-JND studies. VPA-JND exposes areas where state-of-the-art LMMs, including GPT-4o and the InternVL2.5 series, struggle with basic comparison queries and fall significantly short of human-level visual performance. We further explore the effects of vision and language backbones and find a notable correlation between their design philosophy that may instruct the future refinement of LMMs for their visual acuity. Together, our research underscores the significance of LMM-JND as a unique perspective for studying LMMs, and predictable LMM-JND is crucial for security concerns. This work will be available at https://github.com/zijianchen98/LMM-JND.
Problem

Research questions and friction points this paper is trying to address.

Exploring perceptual boundaries of large multimodal models (LMMs) across tasks and stimuli
Investigating visual blind spots and defects in current LMMs for security and efficiency
Quantifying LMM-JND to assess model performance gaps compared to human vision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introducing LMM-JND for quantifying visual blind spots
Creating VPA-JND dataset with 489k stimuli
Analyzing vision-language backbone effects on LMMs
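The core idea of an LMM-JND, as described above, is the minimal distortion level at which a model first reports a visible difference from a reference image. A minimal sketch of such a determination loop is shown below; the paper's actual pipeline is not specified here, and `find_jnd`, `stub_model`, and the numeric threshold are all hypothetical names used purely for illustration.

```python
# Hedged sketch of a JND-style determination loop (not the paper's
# exact pipeline): sweep distortion levels from weakest to strongest
# and return the first level the model reports as visibly different.

def find_jnd(model_detects, levels):
    """Return the smallest distortion level that `model_detects`
    flags as a visible difference, or None if none is detected."""
    for level in sorted(levels):
        if model_detects(level):
            return level
    return None

# Stand-in for an LMM comparison query ("are these two images
# identical?"): this toy "model" only notices distortions stronger
# than level 3 -- a made-up threshold for demonstration.
stub_model = lambda level: level > 3

jnd = find_jnd(stub_model, range(1, 11))
print(jnd)  # → 4
```

In practice, `model_detects` would wrap a query to the LMM with the reference image and its distorted counterpart; a binary search over levels could replace the linear sweep when queries are expensive.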
Zijian Chen
Shanghai Jiao Tong University | Shanghai AI Laboratory
Image/Video Quality Assessment · Large Multi-modal Models
Yuan Tian
Institute of Image Communication and Information Processing, Shanghai Jiao Tong University, Shanghai 200240, China and Shanghai AI Laboratory, Shanghai 200232, China
Yuze Sun
Tsinghua University
Deep Learning · AI for Earth
Wei Sun
School of Communication and Electronic Engineering, East China Normal University, Shanghai 200241, China
Zicheng Zhang
Institute of Image Communication and Information Processing, Shanghai Jiao Tong University, Shanghai 200240, China and Shanghai AI Laboratory, Shanghai 200232, China
Weisi Lin
President's Chair Professor in Computer Science, CCDS, Nanyang Technological University
Perception-inspired signal modeling · perceptual multimedia quality evaluation · video compression · image processing & analysis
Guangtao Zhai
Professor, IEEE Fellow, Shanghai Jiao Tong University
Multimedia Signal Processing · Visual Quality Assessment · QoE · AI Evaluation · Displays
Wenjun Zhang
City University of Hong Kong
Thin film technology · nanomaterials and nanodevices