Exploring Diagnostic Prompting Approach for Multimodal LLM-based Visual Complexity Assessment: A Case Study of Amazon Search Result Pages

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing prompting methods for assessing the visual complexity of Amazon Search Results Pages (SRPs) with multimodal large language models (MLLMs), such as prompts based on Gestalt principles, show substantial misalignment with human judgments. Method: the authors propose a structured diagnostic prompting framework that integrates decision-tree analysis, failure-case mining, and expert-annotated data for rigorous comparative evaluation. Contribution/Results: the approach significantly improves reliability, raising the F1-score from 0.031 to 0.297 (an 858% relative gain), though absolute agreement with humans remains modest (Cohen's κ = 0.071). Analysis reveals that MLLMs attend predominantly to superficial visual cues (e.g., badge clutter), whereas humans prioritize semantic content similarity and color intensity, highlighting both partial alignment and critical gaps in human-AI reasoning. This work pioneers the application of diagnostic prompting to SRP complexity modeling, establishing an interpretable paradigm for multimodal visual complexity assessment.

📝 Abstract
This study investigates whether diagnostic prompting can improve Multimodal Large Language Model (MLLM) reliability for visual complexity assessment of Amazon Search Results Pages (SRPs). We compare diagnostic prompting with standard Gestalt-principles-based prompting using 200 Amazon SRPs and human expert annotations. Diagnostic prompting showed notable improvements in predicting human complexity judgments, with F1-score increasing from 0.031 to 0.297 (+858% relative improvement), though absolute performance remains modest (Cohen's κ = 0.071). The decision tree revealed that models prioritize visual design elements (badge clutter: 38.6% importance) while humans emphasize content similarity, suggesting partial alignment in reasoning patterns. Failure case analysis reveals persistent challenges in MLLM visual perception, particularly for product similarity and color intensity assessment. Our findings indicate that diagnostic prompting is a promising initial step toward human-aligned MLLM-based evaluation, though failure cases with consistent human-MLLM disagreement call for continued research and refinement of prompting approaches, with larger ground-truth datasets, before reliable practical deployment.
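The abstract's two headline numbers (F1 and Cohen's κ) are standard agreement metrics between MLLM predictions and human labels. As a minimal sketch of how they are computed, the snippet below implements both from scratch on invented binary complexity labels; the data here is hypothetical and not the paper's 200-page dataset.

```python
# Toy sketch of the reported agreement metrics (F1, Cohen's kappa),
# computed on made-up binary labels -- NOT the paper's data.

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def cohens_kappa(y_true, y_pred):
    """Chance-corrected agreement: (observed - expected) / (1 - expected)."""
    n = len(y_true)
    labels = sorted(set(y_true) | set(y_pred))
    observed = sum(1 for t, p in zip(y_true, y_pred) if t == p) / n
    expected = sum(
        (sum(1 for t in y_true if t == c) / n) * (sum(1 for p in y_pred if p == c) / n)
        for c in labels
    )
    return (observed - expected) / (1 - expected)

# Hypothetical human labels vs. MLLM predictions (1 = "complex", 0 = "simple")
human = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
mllm  = [1, 0, 1, 0, 1, 0, 0, 1, 1, 0]

print(round(f1_score(human, mllm), 3))     # → 0.667
print(round(cohens_kappa(human, mllm), 3)) # → 0.4
```

Note that κ can be low even when F1 looks reasonable, since κ discounts the agreement expected by chance; this is why the paper reports κ = 0.071 as "modest" despite the large relative F1 gain.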
Problem

Research questions and friction points this paper is trying to address.

Improving MLLM reliability for visual complexity assessment
Comparing diagnostic prompting with gestalt-based prompting methods
Addressing challenges in MLLM visual perception and human alignment
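The contrast drawn above is between a single gestalt-style question and a structured, stepwise diagnosis. The sketch below illustrates what such a diagnostic prompt could look like; the checklist questions and function name are invented for illustration, as the paper's exact prompt wording is not reproduced on this page.

```python
# Hypothetical sketch of a structured "diagnostic" prompt for an MLLM,
# as opposed to a single gestalt-principles question. The checklist items
# are invented; they mirror the cue types discussed in the paper
# (badges, product similarity, color intensity).

DIAGNOSTIC_CHECKS = [
    "How many distinct promotional badges appear on the page?",
    "How visually similar are the listed products to one another?",
    "How saturated or intense are the dominant colors?",
    "How dense is the text relative to whitespace?",
]

def build_diagnostic_prompt(checks=DIAGNOSTIC_CHECKS):
    """Assemble a numbered diagnostic checklist ending in a final rating."""
    lines = [
        "You are rating the visual complexity of a search results page.",
        "Answer each diagnostic question, then give a final rating.",
    ]
    lines += [f"{i}. {q}" for i, q in enumerate(checks, start=1)]
    lines.append("Final answer: rate overall complexity as 'simple' or 'complex'.")
    return "\n".join(lines)

print(build_diagnostic_prompt())
```

Forcing the model to answer intermediate questions before committing to a rating is what makes the resulting judgments inspectable, e.g. via the decision-tree analysis described in the summary.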
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diagnostic prompting improves MLLM reliability for visual assessment
Models prioritize visual design elements like badge clutter
Partial alignment with human reasoning but challenges persist
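The "badge clutter: 38.6% importance" finding comes from a decision tree fit on model-visible features. As a rough sketch of how such per-feature importances arise, the snippet below scores each feature by the Gini impurity reduction of its best single-threshold split and normalizes the gains to sum to 1; the feature names echo the paper's cues, but the values and labels are invented.

```python
# Sketch of per-feature importance (cf. the paper's "badge clutter: 38.6%"):
# Gini impurity reduction of the best single split per feature, normalized.
# All feature values and labels below are invented toy data.

def gini(labels):
    """Binary Gini impurity: 2 * p * (1 - p)."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2 * p * (1 - p)

def best_split_gain(values, labels):
    """Largest impurity reduction over all single-threshold splits of one feature."""
    parent, n, best = gini(labels), len(labels), 0.0
    for thr in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= thr]
        right = [l for v, l in zip(values, labels) if v > thr]
        if not left or not right:
            continue
        child = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
        best = max(best, parent - child)
    return best

# Hypothetical MLLM-extracted SRP features vs. human "complex" labels (1/0)
features = {
    "badge_clutter":      [5, 1, 4, 2, 6, 1, 2, 5],
    "color_intensity":    [3, 2, 3, 2, 4, 1, 3, 3],
    "content_similarity": [1, 4, 2, 4, 1, 5, 4, 2],
}
labels = [1, 0, 1, 0, 1, 0, 0, 1]

gains = {name: best_split_gain(vals, labels) for name, vals in features.items()}
total = sum(gains.values())
importances = {name: g / total for name, g in gains.items()}
print(importances)
```

A full decision tree accumulates these weighted impurity reductions across every split a feature is used in, but the normalization step is the same: importances sum to 1, so a single dominant cue like badge clutter shows up directly as a large share.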