🤖 AI Summary
Current multimodal large language models lack fine-grained, row- and column-level evidence attribution capabilities when answering questions about structured tables. This work introduces the first systematic evaluation framework for visual table attribution, assessing mainstream models across three table formats—images, Markdown, and JSON—using diverse prompting strategies to measure row- and column-level attribution accuracy. The study reveals that while models achieve moderate question-answering accuracy, their attribution performance is substantially weaker, particularly in JSON format where it approaches random levels. Row-level attributions consistently outperform column-level ones, and image-based representations yield more reliable attributions than textual formats. These findings highlight a critical gap between accuracy and interpretability in multimodal reasoning over structured data.
📝 Abstract
Multimodal Large Language Models (mLLMs) are often used to answer questions over structured data such as tables rendered as Markdown, JSON, or images. While these models can often give correct answers, users also need to know where those answers come from. In this work, we study structured data attribution (citation): the ability of a model to point to the specific rows and columns that support an answer. We evaluate several mLLMs across different table formats and prompting strategies. Our results show a clear gap between question answering and evidence attribution. Although question answering accuracy remains moderate, attribution accuracy is much lower across all models, approaching random for JSON inputs. We also find that models are more reliable at citing rows than columns, and struggle more with textual formats than with images. Finally, we observe notable differences across model families. Overall, our findings show that current mLLMs are unreliable at providing fine-grained, trustworthy attribution for structured data, which limits their use in applications requiring transparency and traceability.
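As an illustration, row- or column-level attribution can be scored as an exact-set match between the indices a model cites and the gold evidence. The sketch below assumes that criterion; the function name, data shapes, and the exact-match rule are illustrative assumptions, not the paper's published protocol.

```python
# Illustrative sketch of row/column-level attribution scoring.
# Assumption: an attribution is "correct" only if the cited index set
# exactly matches the gold evidence set (a strict criterion; partial-credit
# variants are also possible).

def attribution_accuracy(predictions, gold):
    """Fraction of examples whose cited index set exactly matches gold.

    predictions, gold: lists of sets of row (or column) indices,
    one set per question.
    """
    assert len(predictions) == len(gold)
    if not gold:
        return 0.0
    correct = sum(1 for p, g in zip(predictions, gold) if set(p) == set(g))
    return correct / len(gold)


# Example: for question 1 the model cites rows {1, 2} and gold is {1, 2};
# for question 2 it cites {0, 3} but gold is {0}.
preds = [{1, 2}, {0, 3}]
golds = [{1, 2}, {0}]
print(attribution_accuracy(preds, golds))  # 0.5
```

The same function applies unchanged to column indices, so row- and column-level attribution can be reported separately, as in the evaluation described above.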