CoE: Collaborative Entropy for Uncertainty Quantification in Agentic Multi-LLM Systems

πŸ“… 2026-03-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses a limitation of existing approaches that model uncertainty only within individual large language models, failing to capture semantic disagreements across multiple models. To overcome this, the paper introduces Collaborative Entropy (CoE), a unified information-theoretic metric that jointly quantifies intra-model semantic entropy and inter-model average pairwise divergence within a shared semantic clustering space. CoE enables, for the first time, system-level semantic uncertainty quantification without requiring any training, and supports post-hoc coordination via heuristic strategies. Experimental results demonstrate that CoE significantly outperforms conventional entropy- and divergence-based baselines on TriviaQA and SQuAD, with performance gains becoming more pronounced as model heterogeneity increases.
πŸ“ Abstract
Uncertainty estimation in multi-LLM systems remains largely single-model-centric: existing methods quantify uncertainty within each model but do not adequately capture semantic disagreement across models. To address this gap, we propose Collaborative Entropy (CoE), a unified information-theoretic metric for semantic uncertainty in multi-LLM collaboration. CoE is defined on a shared semantic cluster space and combines two components: intra-model semantic entropy and inter-model divergence to the ensemble mean. CoE is not a weighted ensemble predictor; it is a system-level uncertainty measure that characterizes collaborative confidence and disagreement. We analyze several core properties of CoE, including non-negativity, zero-value certainty under perfect semantic consensus, and the behavior of CoE when individual models collapse to delta distributions. These results clarify when reducing per-model uncertainty is sufficient and when residual inter-model disagreement remains. We also present a simple CoE-guided, training-free post-hoc coordination heuristic as a practical application of the metric. Experiments on TriviaQA and SQuAD with LLaMA-3.1-8B-Instruct, Qwen-2.5-7B-Instruct, and Mistral-7B-Instruct show that CoE provides stronger uncertainty estimation than standard entropy- and divergence-based baselines, with gains becoming larger as additional heterogeneous models are introduced. Overall, CoE offers a useful uncertainty-aware perspective on multi-LLM collaboration.
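The abstract defines CoE on a shared semantic cluster space as the combination of intra-model semantic entropy and inter-model divergence to the ensemble mean. A minimal sketch of that decomposition, assuming uniform averaging over models and KL divergence to the ensemble mean (the paper's exact weighting and divergence choice may differ; `collaborative_entropy` and its interface are illustrative, not the authors' implementation):

```python
import numpy as np

def collaborative_entropy(cluster_probs):
    """Illustrative CoE sketch for K models over C shared semantic clusters.

    cluster_probs: (K, C) array; row k is model k's probability
    distribution over the shared semantic clusters.
    Returns (intra, inter, coe).
    """
    P = np.asarray(cluster_probs, dtype=float)
    eps = 1e-12  # guard against log(0)

    # Intra-model term: average Shannon entropy of each model's distribution.
    intra = float(-np.mean(np.sum(P * np.log(P + eps), axis=1)))

    # Inter-model term: average KL divergence from each model's
    # distribution to the ensemble-mean distribution.
    mean_dist = P.mean(axis=0)
    inter = float(np.mean(
        np.sum(P * (np.log(P + eps) - np.log(mean_dist + eps)), axis=1)
    ))

    return intra, inter, intra + inter
```

The sketch reproduces the two limiting behaviors the abstract highlights: under perfect semantic consensus the inter-model term vanishes, and when every model collapses to a delta distribution the intra-model term vanishes while residual disagreement (if any) survives in the inter-model term.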
Problem

Research questions and friction points this paper is trying to address.

uncertainty quantification
multi-LLM systems
semantic disagreement
collaborative entropy
inter-model divergence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Collaborative Entropy
semantic uncertainty
multi-LLM systems
information-theoretic metric
ensemble disagreement
Kangkang Sun
Shanghai Key Laboratory of Integrated Administration Technologies for Information Security, School of Computer Science, Shanghai Jiao Tong University, Shanghai, China
Jun Wu
Shanghai Key Laboratory of Integrated Administration Technologies for Information Security, School of Computer Science, Shanghai Jiao Tong University, Shanghai, China
Jianhua Li
Shanghai Key Laboratory of Integrated Administration Technologies for Information Security, School of Computer Science, Shanghai Jiao Tong University, Shanghai, China
Minyi Guo
IEEE Fellow, Chair Professor, Shanghai Jiao Tong University
Parallel Computing, Compiler Optimization, Cloud Computing, Networking, Big Data
Xiuzhen Che
Shanghai Key Laboratory of Integrated Administration Technologies for Information Security, School of Computer Science, Shanghai Jiao Tong University, Shanghai, China
Jianwei Huang
Texas A&M University