ViTaB-A: Evaluating Multimodal Large Language Models on Visual Table Attribution

📅 2026-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current multimodal large language models lack fine-grained, row- and column-level evidence attribution capabilities when answering questions about structured tables. This work introduces the first systematic evaluation framework for visual table attribution, assessing mainstream models across three table formats—images, Markdown, and JSON—using diverse prompting strategies to measure row- and column-level attribution accuracy. The study reveals that while models achieve moderate question-answering accuracy, their attribution performance is substantially weaker, particularly in JSON format where it approaches random levels. Row-level attributions consistently outperform column-level ones, and image-based representations yield more reliable attributions than textual formats. These findings highlight a critical gap between accuracy and interpretability in multimodal reasoning over structured data.

📝 Abstract
Multimodal Large Language Models (mLLMs) are often used to answer questions about structured data such as tables in Markdown, JSON, and images. While these models can often give correct answers, users also need to know where those answers come from. In this work, we study structured data attribution/citation: the ability of a model to point to the specific rows and columns that support an answer. We evaluate several mLLMs across different table formats and prompting strategies. Our results show a clear gap between question answering and evidence attribution. Although question-answering accuracy remains moderate, attribution accuracy is much lower across all models, approaching random for JSON inputs. We also find that models are more reliable at citing rows than columns, and struggle more with textual formats than with images. Finally, we observe notable differences across model families. Overall, our findings show that current mLLMs are unreliable at providing fine-grained, trustworthy attribution for structured data, which limits their use in applications requiring transparency and traceability.
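To make the row- and column-level attribution metric concrete, a common way to score cited evidence is set-based F1 between predicted and gold indices. The sketch below is hypothetical (the paper's exact scoring code is not given here); `attribution_f1` and the index format are assumptions for illustration.

```python
# Hypothetical sketch of row/column attribution scoring, not the paper's
# official evaluation code. Assumes each example provides gold evidence
# indices and the model returns a list of cited indices.

def attribution_f1(predicted, gold):
    """Set-based F1 between predicted and gold evidence indices."""
    pred_set, gold_set = set(predicted), set(gold)
    if not pred_set and not gold_set:
        return 1.0  # nothing to cite and nothing cited: perfect match
    tp = len(pred_set & gold_set)  # correctly cited indices
    if tp == 0:
        return 0.0
    precision = tp / len(pred_set)
    recall = tp / len(gold_set)
    return 2 * precision * recall / (precision + recall)

# Example: model cites rows [1, 3]; gold evidence is rows [1, 2].
score = attribution_f1([1, 3], [1, 2])  # precision 0.5, recall 0.5 -> F1 0.5
```

Row-level and column-level scores would be computed separately with this kind of metric, which is consistent with the paper's observation that row attributions can outperform column attributions on the same examples.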
Problem

Research questions and friction points this paper is trying to address.

multimodal large language models
structured data attribution
table citation
evidence attribution
trustworthiness
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal large language models
structured data attribution
visual table reasoning
evidence citation
model transparency
Yahia Alqurnawi
School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA
Preetom Biswas
School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA
Anmol Rao
School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA
Tejas Anvekar
School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA
Chitta Baral
Professor of Computer Science, Arizona State University
Knowledge Representation, NLP, Vision, Robotics, Integrated Systems
Vivek Gupta
Assistant Professor of Computer Science, Arizona State University
Artificial Intelligence, Natural Language Processing, Large Language Models, Information Retrieval