🤖 AI Summary
This work addresses the often-overlooked issue of behavioral entanglement among large language models (LLMs) in multi-model systems, where shared training data and alignment procedures can induce hidden dependencies that lead to synchronized errors and undermine system reliability, particularly in applications such as LLM-as-a-judge. To tackle this, we propose the first black-box auditing framework for behavioral entanglement, introducing two information-theoretic metrics: a Difficulty-Weighted Behavioral Entanglement Index and Cumulative Information Gain (CIG). Our analysis demonstrates a statistically significant association between CIG and degraded judge precision, with Spearman's ρ reaching up to 0.71. Building on these insights, we design a de-entangled verifier ensemble reweighting method, which, when evaluated across 18 mainstream LLMs, improves verification accuracy by up to 4.5% over majority voting.
📝 Abstract
The rapid growth of the large language model (LLM) ecosystem raises a critical question: are seemingly diverse models truly independent? Shared pretraining data, distillation, and alignment pipelines can induce hidden behavioral dependencies (latent entanglement) that undermine multi-model systems such as LLM-as-a-judge pipelines and ensemble verification, which implicitly assume independent signals. In practice, this manifests as correlated reasoning patterns and synchronized failures, where apparent agreement reflects shared error modes rather than independent validation. To address this, we develop a statistical framework for auditing behavioral entanglement among black-box LLMs. Our approach introduces a multi-resolution hierarchy that characterizes the joint failure manifold through two information-theoretic metrics: (i) a Difficulty-Weighted Behavioral Entanglement Index, which amplifies synchronized failures on easy tasks, and (ii) a Cumulative Information Gain (CIG) metric, which captures directional alignment in erroneous responses. Through extensive experiments on 18 LLMs from six model families, we identify widespread behavioral entanglement and analyze its impact on LLM-as-a-judge evaluation. We find that CIG exhibits a statistically significant association with degradation in judge precision, with a Spearman coefficient of 0.64 (p < 0.001) for GPT-4o-mini and 0.71 (p < 0.01) for Llama3-based judges, indicating that stronger dependency corresponds to increased over-endorsement bias. Finally, we demonstrate a practical application of entanglement auditing: de-entangled verifier ensemble reweighting. By adjusting model contributions based on inferred independence, the proposed method mitigates correlated bias and improves verification performance, achieving up to a 4.5% accuracy gain over majority voting.
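To make the two ideas concrete, here is a minimal illustrative sketch, not the paper's actual formulas: a toy difficulty-weighted entanglement index that up-weights synchronized failures on easy tasks, and a de-entangled reweighting step that shrinks the vote weight of models in proportion to their average pairwise entanglement. All function names, the `1 - difficulty` weighting, and the `1 - entanglement` weight rule are assumptions for illustration only.

```python
import numpy as np
from itertools import combinations

def difficulty_weighted_entanglement(correct_a, correct_b, difficulty):
    """Toy entanglement index between two models (illustrative, not the paper's exact metric).

    correct_a, correct_b: boolean arrays of per-task correctness.
    difficulty: per-task difficulty in [0, 1] (e.g. 1 - mean accuracy over all models).
    Synchronized failures on easy tasks (low difficulty) receive the largest weight.
    """
    sync_fail = (~correct_a) & (~correct_b)   # both models wrong on the same task
    weights = 1.0 - difficulty                # easy tasks contribute more
    return float(np.sum(weights * sync_fail) / np.sum(weights))

def de_entangled_weights(correct, difficulty):
    """Vote weights that down-weight models by average entanglement with the rest.

    correct: (n_models, n_tasks) boolean matrix of per-task correctness.
    Returns weights that sum to 1; more independent models vote with more weight.
    """
    n = correct.shape[0]
    avg_ent = np.zeros(n)
    for i, j in combinations(range(n), 2):
        e = difficulty_weighted_entanglement(correct[i], correct[j], difficulty)
        avg_ent[i] += e
        avg_ent[j] += e
    avg_ent /= (n - 1)
    raw = 1.0 - avg_ent                       # assumed rule: independence -> weight
    return raw / raw.sum()
```

A weighted verifier ensemble would then tally each model's accept/reject vote with these weights instead of the uniform weights implicit in majority voting, which is how correlated bias among entangled models gets discounted.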