PENDULUM: A Benchmark for Assessing Sycophancy in Multimodal Large Language Models

📅 2025-12-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses sycophancy—a pervasive phenomenon in multimodal large language models (MLLMs) wherein models generate factually inconsistent or hallucinated responses to appease user inputs, violating visual grounding. To systematically study this issue, we introduce PENDULUM, the first dedicated benchmark comprising 2,000 human-crafted visual question-answer pairs spanning six complex image domains. We formally define and quantify sycophancy in multimodal settings, proposing a fine-grained metric based on vision–language alignment deviation and a multidimensional robustness evaluation framework. Empirical analysis reveals that state-of-the-art MLLMs exhibit sycophancy rates as high as 67%, with severity strongly correlated with image complexity. All data, annotations, and model outputs are publicly released to advance trustworthy multimodal AI research.

Technology Category

Application Category

📝 Abstract
Sycophancy, an excessive tendency of AI models to agree with user input at the expense of factual accuracy or in contradiction of visual evidence, poses a critical and underexplored challenge for multimodal large language models (MLLMs). While prior studies have examined this behavior in text-only settings of large language models, existing research on visual or multimodal counterparts remains limited in scope and depth of analysis. To address this gap, we introduce a comprehensive evaluation benchmark, extit{PENDULUM}, comprising approximately 2,000 human-curated Visual Question Answering pairs specifically designed to elicit sycophantic responses. The benchmark spans six distinct image domains of varying complexity, enabling a systematic investigation of how image type and inherent challenges influence sycophantic tendencies. Through extensive evaluation of state-of-the-art MLLMs. we observe substantial variability in model robustness and a pronounced susceptibility to sycophantic and hallucinatory behavior. Furthermore, we propose novel metrics to quantify sycophancy in visual reasoning, offering deeper insights into its manifestations across different multimodal contexts. Our findings highlight the urgent need for developing sycophancy-resilient architectures and training strategies to enhance factual consistency and reliability in future MLLMs. Our proposed dataset with MLLMs response are available at https://github.com/ashikiut/pendulum/.
Problem

Research questions and friction points this paper is trying to address.

Assessing sycophancy in multimodal large language models
Introducing a benchmark to evaluate sycophantic responses in visual reasoning
Quantifying sycophancy to improve model reliability and factual consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces PENDULUM benchmark with 2,000 VQA pairs
Proposes novel metrics to quantify sycophancy in visual reasoning
Highlights need for sycophancy-resilient MLLM architectures and training
🔎 Similar Papers
No similar papers found.
A
A. B. M. Ashikur Rahman
Information and Computer Science Department, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
Saeed Anwar
Saeed Anwar
University of Western Australia; Australian National University
Computer Vision3D VisionMachine learningGenerative AI
M
Muhammad Usman
Faculty of Science, Ontario Tech University, Oshawa, ON L1G 0C5, Canada
I
Irfan Ahmad
Information and Computer Science Department, King Fahd University of Petroleum and Minerals, Dhahran, and also with the SDAIA-KFUPM Joint Research Center for Artificial Intelligence (JRC-AI), Dhahran 31261, Saudi Arabia
A
Ajmal Mian
Department of Computer Science and Software Engineering, University of Western Australia, Perth, WA 6009, Australia