Interpreting Multi-Attribute Confounding through Numerical Attributes in Large Language Models

📅 2025-11-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how large language models (LLMs) represent multi-attribute numerical entities and where that representation makes numerical reasoning fragile, focusing on two questions: (1) how multiple numerical attributes of a single entity are internally integrated, and (2) how irrelevant numerical context interferes with those representations and the resulting outputs. Using linear probing, partial correlation analysis, and prompt-based vulnerability testing, we conduct an empirical study across models of varying scale. We show that numerical attributes consistently reside in a shared low-dimensional latent subspace, uncover systematic amplification of real-world numerical correlations, and identify stable, magnitude-specific representation shifts induced by irrelevant contextual numbers. These findings reveal decision fragility arising from representational entanglement and expose downstream impact patterns that depend on model scale. The results provide an interpretable foundation for improving numerical fairness and introduce “representation-aware control” as a novel optimization paradigm grounded in internal representation analysis.
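For concreteness, here is a minimal sketch of the linear-probing step described above, assuming hidden states have already been extracted per entity mention; the synthetic data, the Ridge probe, and the attribute are illustrative stand-ins rather than the paper's actual setup.

```python
# Linear-probing sketch: can a linear map recover a numerical attribute from
# hidden states? All data below are synthetic stand-ins (hypothetical layer,
# attribute, and dimensionality), not the paper's actual setup.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for layer-l hidden states of N entity mentions (d-dimensional).
N, d = 500, 768
hidden_states = rng.normal(size=(N, d))

# Stand-in for a log-scaled numerical attribute (e.g., log population),
# generated here so that it is linearly decodable up to noise.
direction = rng.normal(size=d)
attribute = hidden_states @ direction + rng.normal(scale=0.1, size=N)

X_tr, X_te, y_tr, y_te = train_test_split(
    hidden_states, attribute, test_size=0.2, random_state=0
)
probe = Ridge(alpha=1.0).fit(X_tr, y_tr)
print(f"held-out probe R^2: {probe.score(X_te, y_te):.3f}")
```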

📝 Abstract
Although behavioral studies have documented numerical reasoning errors in large language models (LLMs), the underlying representational mechanisms remain unclear. We hypothesize that numerical attributes occupy shared latent subspaces and investigate two questions: (1) How do LLMs internally integrate multiple numerical attributes of a single entity? (2) How does irrelevant numerical context perturb these representations and their downstream outputs? To address these questions, we combine linear probing with partial correlation analysis and prompt-based vulnerability tests across models of varying sizes. Our results show that LLMs encode real-world numerical correlations but tend to systematically amplify them. Moreover, irrelevant context induces consistent shifts in magnitude representations, with downstream effects that vary by model size. These findings reveal a vulnerability in LLM decision-making and lay the groundwork for fairer, representation-aware control under multi-attribute entanglement.
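The following sketch illustrates the kind of partial-correlation comparison the abstract refers to, on purely synthetic data: the real-world correlation, the probe-decoded values, and the control variable are simulated assumptions, not results from the paper.

```python
# Partial-correlation sketch: does the correlation between two probe-decoded
# attributes exceed the real-world correlation once a control is residualized
# out? All values are simulated; the "amplification" here is built into the
# toy data, not measured from any model.
import numpy as np
from scipy import stats


def partial_corr(x, y, controls):
    """Pearson correlation of x and y after regressing `controls` out of both."""
    Z = np.column_stack([np.ones_like(x), controls])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return stats.pearsonr(rx, ry)[0]


rng = np.random.default_rng(1)
n = 1000
# Real-world attributes with a moderate correlation.
real_a = rng.normal(size=n)
real_b = 0.4 * real_a + rng.normal(scale=0.9, size=n)
# Hypothetical probe-decoded values that couple the attributes more tightly.
decoded_a = real_a + rng.normal(scale=0.2, size=n)
decoded_b = 0.8 * decoded_a + 0.2 * real_b + rng.normal(scale=0.3, size=n)

print(f"real-world corr(a, b):         {stats.pearsonr(real_a, real_b)[0]:.2f}")
print(f"decoded corr(a, b):            {stats.pearsonr(decoded_a, decoded_b)[0]:.2f}")
print(f"decoded partial corr | real_a: {partial_corr(decoded_a, decoded_b, real_a):.2f}")
```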
Problem

Research questions and friction points this paper is trying to address.

Investigating numerical attribute integration mechanisms in LLMs
Examining irrelevant context effects on LLM representations
Revealing systematic amplification of real-world correlations in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linear probing with partial correlation analysis
Prompt-based vulnerability tests across models (see the sketch after this list)
Investigating shared latent subspaces of attributes
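As referenced above, here is a minimal sketch of a prompt-based vulnerability test, assuming a small open model (gpt2) as a stand-in for the models studied; the prompts, layer choice, and distractor number are hypothetical.

```python
# Prompt-based vulnerability sketch: how much does an entity's representation
# move when an irrelevant number is prepended? gpt2, the layer index, the
# prompts, and the distractor value are all illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2").eval()


def last_token_state(prompt: str, layer: int = 6) -> torch.Tensor:
    """Hidden state of the prompt's final token at the chosen layer."""
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[layer][0, -1]


clean = "The population of Tokyo is"
distracted = "The ticket costs 48,210 yen. The population of Tokyo is"

sim = torch.nn.functional.cosine_similarity(
    last_token_state(clean), last_token_state(distracted), dim=0
)
print(f"cosine similarity with vs. without the distractor number: {sim.item():.3f}")
```

In the paper's framing, a shift of this representation that varies systematically with the distractor's magnitude would correspond to the magnitude-specific interference described in the summary.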
Authors
Hirohane Takagi
The University of Tokyo
Gouki Minegishi
University of Tokyo
Deep Learning, Interpretability
Shota Kizawa
The University of Tokyo
Issey Sukeda
The University of Tokyo
Hitomi Yanaka
The University of Tokyo, RIKEN
Natural Language Processing, Semantics