Format as a Prior: Quantifying and Analyzing Bias in LLMs for Heterogeneous Data

📅 2025-08-13
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies a systematic "format bias" in large language models (LLMs) when they process heterogeneous data (text, tables, infoboxes, and knowledge graphs), a bias that distorts reasoning and raises risks in downstream tasks. Through a three-stage empirical analysis (existence validation → driver identification → mechanism dissection), the authors first establish that the bias is pervasive across major LLM families. They then identify information richness, structural quality, and representation type as its key drivers, with imbalanced attention allocation as the underlying mechanism. Building on this insight, they evaluate a lightweight attention-reweighting intervention and derive three concrete mitigation strategies: (1) optimizing data preprocessing pipelines, (2) designing inference-time intervention mechanisms, and (3) constructing format-balanced training corpora. The findings provide both theoretical foundations and practical guidelines for developing fairer and more robust systems for heterogeneous data understanding.

📝 Abstract
Large Language Models (LLMs) are increasingly employed in applications that require processing information from heterogeneous formats, including texts, tables, infoboxes, and knowledge graphs. However, systematic biases toward particular formats may undermine LLMs' ability to integrate heterogeneous data impartially, potentially resulting in reasoning errors and increased risks in downstream tasks. Yet it remains unclear whether such biases are systematic, which data-level factors drive them, and what internal mechanisms underlie their emergence. In this paper, we present the first comprehensive study of format bias in LLMs through a three-stage empirical analysis. The first stage explores the presence and direction of bias across a diverse range of LLMs. The second stage examines how key data-level factors influence these biases. The third stage analyzes how format bias emerges within LLMs' attention patterns and evaluates a lightweight intervention to test its effectiveness. Our results show that format bias is consistent across model families, driven by information richness, structure quality, and representation type, and is closely associated with attention imbalance within the LLMs. Based on these investigations, we identify three future research directions to reduce format bias: enhancing data pre-processing through format repair and normalization, introducing inference-time interventions such as attention re-weighting, and developing format-balanced training corpora. These directions will support the design of more robust and fair heterogeneous data processing systems.
Problem

Research questions and friction points this paper is trying to address.

Investigates systematic format bias in LLMs processing heterogeneous data
Identifies data factors like information richness driving format bias
Analyzes attention patterns underlying bias emergence in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantifying format bias through three-stage empirical analysis
Analyzing bias via attention patterns and lightweight interventions
Proposing format repair, attention re-weighting, and balanced training
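The attention re-weighting idea above can be illustrated with a minimal sketch. The function below is a hypothetical construction, not the paper's implementation: it assumes a single attention distribution over token positions and known per-format token spans, and rebalances the distribution so each format segment receives equal total attention mass.

```python
import numpy as np

def reweight_attention(attn, segments):
    """Rebalance an attention distribution so each format segment
    (e.g. a text span, a table span, an infobox span) receives equal
    total attention mass. Illustrative sketch, not the paper's method.

    attn:     1-D array of attention weights over token positions.
    segments: list of (start, end) index ranges, one per data format.
    """
    attn = np.asarray(attn, dtype=float)
    out = attn.copy()
    target = 1.0 / len(segments)  # equal mass assigned to each format
    for start, end in segments:
        mass = attn[start:end].sum()
        if mass > 0:
            # Scale the segment so its total mass equals the target,
            # preserving the within-segment attention pattern.
            out[start:end] = attn[start:end] * (target / mass)
    return out / out.sum()  # renormalize to a valid distribution
```

Because each segment is scaled uniformly, the relative attention ordering of tokens inside a format is preserved; only the mass allocated across formats changes.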
Jiacheng Liu
School of Computer Science, Wuhan University, China
Mayi Xu
Wuhan University
Natural Language Processing
Qiankun Pi
School of Computer Science, Wuhan University, China
Wenli Li
School of Computer Science, Wuhan University, China
Ming Zhong
School of Computer Science, Wuhan University, China
Yuanyuan Zhu
School of Computer Science, Wuhan University, China
Mengchi Liu
School of Computer Science, Wuhan University, China
Tieyun Qian
Wuhan University
natural language processing, web data mining