🤖 AI Summary
This work identifies a systematic “format bias” in large language models (LLMs) when processing heterogeneous data—including text, tables, infoboxes, and knowledge graphs—leading to reasoning distortions and downstream risks. Through a three-stage empirical analysis (existence validation → driver identification → mechanism dissection), we first establish the pervasive presence of this bias across major LLM families. We identify information richness, structural quality, and representation type as key drivers, with imbalanced attention allocation constituting the underlying mechanism. Building on this insight, we propose a lightweight attention reweighting intervention and derive three concrete mitigation strategies: (1) optimizing data preprocessing pipelines, (2) designing inference-time intervention mechanisms, and (3) constructing format-balanced training corpora. Our findings provide both theoretical foundations and practical guidelines for developing fairer and more robust systems for heterogeneous data understanding.
📝 Abstract
Large Language Models (LLMs) are increasingly employed in applications that require processing information from heterogeneous formats, including texts, tables, infoboxes, and knowledge graphs. However, systematic biases toward particular formats may undermine LLMs' ability to integrate heterogeneous data impartially, potentially resulting in reasoning errors and increased risks in downstream tasks. Yet it remains unclear whether such biases are systematic, which data-level factors drive them, and what internal mechanisms underlie their emergence. In this paper, we present the first comprehensive study of format bias in LLMs through a three-stage empirical analysis. The first stage explores the presence and direction of bias across a diverse range of LLMs. The second stage examines how key data-level factors influence these biases. The third stage analyzes how format bias emerges within LLMs' attention patterns and evaluates a lightweight intervention to test its effectiveness. Our results show that format bias is consistent across model families, driven by information richness, structural quality, and representation type, and is closely associated with attention imbalance within the LLMs. Based on these investigations, we identify three future research directions to reduce format bias: enhancing data pre-processing through format repair and normalization, introducing inference-time interventions such as attention re-weighting, and developing format-balanced training corpora. These directions will support the design of more robust and fair heterogeneous data processing systems.
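The inference-time attention re-weighting mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes we have access to a model's per-token attention scores and know which span of the input holds each format (text, table, infobox, knowledge graph). The function `reweight_attention`, the equal-share target, and the span layout are all illustrative assumptions.

```python
# Hedged sketch of inference-time attention re-weighting against format bias.
# Assumption: `scores` are non-negative per-token attention scores and `spans`
# maps each format name to its (start, end) token range in the input.

def reweight_attention(scores, spans):
    """Rescale attention so each format span receives an equal total mass.

    scores: list of non-negative attention scores, one per input token.
    spans:  dict mapping format name -> (start, end) token index range.
    Returns a renormalized list of scores.
    """
    target = 1.0 / len(spans)  # equal share of attention per format
    out = list(scores)
    for start, end in spans.values():
        mass = sum(scores[start:end])
        if mass > 0:
            factor = target / mass
            for i in range(start, end):
                out[i] = scores[i] * factor
    total = sum(out)
    return [s / total for s in out] if total > 0 else out


# Example: a text span dominating a table span (0.8 vs 0.2 of the mass)
# is rebalanced so both formats receive 0.5 after renormalization.
weights = reweight_attention(
    [0.6, 0.2, 0.1, 0.1],
    {"text": (0, 2), "table": (2, 4)},
)
```

In practice such an intervention would hook into a specific layer's attention distribution before the value aggregation step; the sketch only shows the rebalancing arithmetic that equalizes attention mass across formats.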