Affordance Benchmark for MLLMs

📅 2025-06-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates multimodal large language models' (MLLMs') capacity to perceive environmental affordances (the action possibilities objects support) by distinguishing and evaluating two fundamental dimensions: constitutive (inherent, static) and transformative (context-dependent, dynamic) affordances. To this end, the paper introduces A4Bench, the first dedicated benchmark of its kind, comprising 1,282 constitutive and 718 transformative affordance questions that incorporate dynamic contextual challenges such as cultural norms, temporal constraints, and individual differences, grounded in human-annotated, structured question-answer pairs. Evaluation employs exact match (EM) accuracy and human consistency analysis across 17 state-of-the-art MLLMs (nine proprietary, eight open-source). Results reveal limited capabilities across the board, especially for transformative affordances: even the best overall EM, 18.05% (Gemini-2.0-Pro), falls markedly below human performance (81.25–85.34%). The study fills a critical gap in evaluating environmental semantic understanding, delivering a reproducible, fine-grained, and ecologically valid assessment framework.
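The headline metric above is exact-match (EM) accuracy over question-answer pairs. Below is a minimal sketch of how such scoring is typically computed, reported separately for the two affordance dimensions; the normalization rules, record fields, and example answers are illustrative assumptions, not the paper's published evaluation code.

```python
# Minimal sketch of exact-match (EM) scoring as described in the summary.
# Case/whitespace normalization is an assumption; the paper's actual
# matching rules may differ.

def exact_match(prediction: str, reference: str) -> bool:
    """True if prediction equals reference after trimming and lowercasing."""
    return prediction.strip().lower() == reference.strip().lower()

def em_accuracy(pairs: list[tuple[str, str]]) -> float:
    """Fraction of (prediction, gold) pairs that match exactly."""
    if not pairs:
        return 0.0
    return sum(exact_match(p, g) for p, g in pairs) / len(pairs)

# Per-dimension reporting, mirroring the constitutive/transformative split
# (records and field names here are hypothetical):
records = [
    {"dim": "constitutive",   "pred": "sit",  "gold": "sit"},
    {"dim": "transformative", "pred": "open", "gold": "pry"},
]
for dim in ("constitutive", "transformative"):
    pairs = [(r["pred"], r["gold"]) for r in records if r["dim"] == dim]
    print(f"{dim}: EM = {em_accuracy(pairs):.2%}")
```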

📝 Abstract
Affordance theory posits that environments inherently offer action possibilities that shape perception and behavior. While Multimodal Large Language Models (MLLMs) excel in vision-language tasks, their ability to perceive affordance, which is crucial for intuitive and safe interactions, remains underexplored. To address this, we introduce A4Bench, a novel benchmark designed to evaluate the affordance perception abilities of MLLMs across two dimensions: 1) Constitutive Affordance, assessing understanding of inherent object properties through 1,282 question-answer pairs spanning nine sub-disciplines, and 2) Transformative Affordance, probing dynamic and contextual nuances (e.g., misleading, time-dependent, cultural, or individual-specific affordance) with 718 challenging question-answer pairs. Evaluating 17 MLLMs (nine proprietary and eight open-source) against human performance, we find that proprietary models generally outperform open-source counterparts, but all exhibit limited capabilities, particularly in transformative affordance perception. Furthermore, even top-performing models, such as Gemini-2.0-Pro (18.05% overall exact match accuracy), significantly lag behind human performance (best: 85.34%, worst: 81.25%). These findings highlight critical gaps in environmental understanding of MLLMs and provide a foundation for advancing AI systems toward more robust, context-aware interactions. The dataset is available at https://github.com/JunyingWang959/A4Bench/.
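To make the two-dimension design concrete, here is a hypothetical record layout for a single benchmark item. The field names and the multiple-choice format are illustrative assumptions, not the actual A4Bench schema; consult the linked repository for the real data format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AffordanceQuestion:
    """Hypothetical layout of one benchmark item (not the actual A4Bench schema)."""
    image_path: str                    # scene or object image the question refers to
    question: str                      # e.g. "Which action does this object afford?"
    choices: list[str]                 # candidate answers, assuming multiple choice
    answer: str                        # gold answer used for exact-match scoring
    dimension: str                     # "constitutive" or "transformative"
    context_tag: Optional[str] = None  # e.g. "cultural", "time-dependent", "misleading"
```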
Problem

Research questions and friction points this paper is trying to address.

Evaluating MLLMs' affordance perception in vision-language tasks
Assessing constitutive and transformative affordance understanding in MLLMs
Identifying gaps between MLLMs and human affordance perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces A4Bench benchmark for MLLMs
Evaluates constitutive and transformative affordance
Highlights gaps in MLLMs' environmental understanding