Factual Inconsistency in Data-to-Text Generation Scales Exponentially with LLM Size: A Statistical Validation

📅 2025-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies an **exponential scaling law** governing factual inconsistency in data-to-text (D2T) generation by large language models (LLMs), challenging the prevailing power-law scaling hypothesis. To rigorously characterize this phenomenon, the authors propose a three-stage statistical validation framework—comprising predictive performance estimation, goodness-of-fit assessment, and comparative model analysis—applied with four state-of-the-art factual consistency metrics. The evaluation spans three LLM families and five D2T benchmarks. Across this empirical study, factual inconsistency is found to scale exponentially, rather than as a power law, with model size, cautioning against the assumption that scaling alone yields proportionally more reliable generation. The work thus provides both a theoretical warning and a methodological foundation for trustworthy D2T generation, establishing for the first time the exponential scaling of factual inconsistency in D2T.

📝 Abstract
Monitoring factual inconsistency is essential for ensuring trustworthiness in data-to-text generation (D2T). While large language models (LLMs) have demonstrated exceptional performance across various D2T tasks, previous studies on scaling laws have primarily focused on generalization error through power law scaling to LLM size (i.e., the number of model parameters). However, no research has examined the impact of LLM size on factual inconsistency in D2T. In this paper, we investigate how factual inconsistency in D2T scales with LLM size by exploring two scaling laws: power law and exponential scaling. To rigorously evaluate and compare these scaling laws, we employ a statistical validation framework consisting of three key stages: predictive performance estimation, goodness-of-fit assessment, and comparative analysis. For a comprehensive empirical study, we analyze three popular LLM families across five D2T datasets, measuring factual inconsistency inversely using four state-of-the-art consistency metrics. Our findings, based on exhaustive empirical results and validated through our framework, reveal that, contrary to the widely assumed power law scaling, factual inconsistency in D2T follows an exponential scaling with LLM size.
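The core comparison described in the abstract—power law, $y = a x^{b}$, versus exponential, $y = a e^{b x}$, scaling of inconsistency with model size—can be sketched with log-transformed least squares: a power law is linear in log-log space, an exponential is linear in semi-log space, so the residual error of each linear fit indicates which law describes the data better. This is a minimal illustrative sketch, not the paper's actual framework; the `sizes` and `incons` values below are synthetic (generated from an exponential by construction), and `linfit` is a hypothetical helper.

```python
import math

def linfit(xs, ys):
    """Ordinary least squares for y = m*x + c; returns slope, intercept, RSS."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    m = sxy / sxx
    c = my - m * mx
    rss = sum((y - (m * x + c)) ** 2 for x, y in zip(xs, ys))
    return m, c, rss

# Synthetic data: model sizes (billions of parameters) and an
# inconsistency score that is exponential in size by construction.
sizes = [0.125, 0.35, 1.3, 2.7, 6.7, 13.0]
incons = [0.08 * math.exp(0.12 * s) for s in sizes]

# Power law  y = a * x^b   ->  ln y = ln a + b * ln x  (log-log linear)
b_pow, lna_pow, rss_pow = linfit([math.log(s) for s in sizes],
                                 [math.log(y) for y in incons])

# Exponential  y = a * e^(b x)  ->  ln y = ln a + b * x  (semi-log linear)
b_exp, lna_exp, rss_exp = linfit(sizes, [math.log(y) for y in incons])

print(f"power-law RSS (log space):   {rss_pow:.6f}")
print(f"exponential RSS (log space): {rss_exp:.6f}")
print("better fit:", "exponential" if rss_exp < rss_pow else "power law")
```

On this synthetic data the exponential fit recovers the generating rate (`b_exp ≈ 0.12`) with near-zero residual, while the power-law fit cannot; the paper's framework goes further, adding held-out predictive performance estimation and formal model-comparison tests rather than relying on residuals alone.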
Problem

Research questions and friction points this paper is trying to address.

Examine factual inconsistency scaling in D2T
Compare power law and exponential scaling impacts
Validate scaling laws using statistical framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exponential scaling of inconsistency
Statistical validation framework
Multiple LLM families analysis
Joy Mahapatra
Indian Statistical Institute Kolkata
Soumyajit Roy
Indian Statistical Institute Kolkata
Utpal Garain
Indian Statistical Institute
Deep Learning · Trustworthy AI Systems · Language Models · Medical data analytics