Bayesian Mixture of Experts For Large Language Models

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Fine-tuned large language models (LLMs) often lack reliable posterior uncertainty estimates, which hinders trustworthy downstream decision-making. Method: This paper proposes a Bayesian mixture-of-experts (MoE) framework that requires no retraining and introduces no additional parameters. Its core innovation is the first application of a structured Laplace approximation to the second linear layer of each MoE expert, combined with block-wise Kronecker-factored low-rank curvature approximations for scalable, modular posterior inference; expert-path selection and uncertainty propagation then yield lightweight calibration of prediction confidence. Results: Experiments on Qwen1.5-MoE and DeepSeek-MoE demonstrate significant reductions in expected calibration error (ECE) and negative log-likelihood (NLL), substantially improving downstream decision reliability. The method provides an efficient, plug-and-play paradigm for uncertainty quantification in MoE-based LLMs without architectural or training modifications.

📝 Abstract
We present Bayesian Mixture of Experts (Bayesian-MoE), a post-hoc uncertainty estimation framework for fine-tuned large language models (LLMs) based on Mixture-of-Experts architectures. Our method applies a structured Laplace approximation to the second linear layer of each expert, enabling calibrated uncertainty estimation without modifying the original training procedure or introducing new parameters. Unlike prior approaches, which apply Bayesian inference to added adapter modules, Bayesian-MoE directly targets the expert pathways already present in MoE models, leveraging their modular design for tractable block-wise posterior estimation. We use Kronecker-factored low-rank approximations to model curvature and derive scalable estimates of predictive uncertainty and marginal likelihood. Experiments on common-sense reasoning benchmarks with Qwen1.5-MoE and DeepSeek-MoE demonstrate that Bayesian-MoE improves both expected calibration error (ECE) and negative log-likelihood (NLL) over baselines, confirming its effectiveness for reliable downstream decision-making.
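The abstract's core recipe, a Kronecker-factored Laplace posterior over one linear layer's weights, can be sketched in a few lines. This is not the authors' code: the layer sizes, synthetic activations/gradients, and prior precision below are illustrative stand-ins for quantities that would be collected from a fine-tuned expert and its training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "second linear layer" of one expert: y = W a, with W of shape (d_out, d_in).
d_in, d_out, n = 8, 4, 256
W_map = rng.normal(scale=0.1, size=(d_out, d_in))  # stand-in for the MAP (fine-tuned) weights

# Stand-ins for activations a and backpropagated output gradients g gathered on data.
A_acts = rng.normal(size=(n, d_in))
G_grads = rng.normal(size=(n, d_out))

# Kronecker factors of the Fisher/Gauss-Newton curvature: F ≈ A ⊗ G,
# with A = E[a aᵀ] (input covariance) and G = E[g gᵀ] (gradient covariance).
# The prior precision is split across the two factors.
prior_prec = 1.0
A = A_acts.T @ A_acts / n + np.sqrt(prior_prec) * np.eye(d_in)
G = G_grads.T @ G_grads / n + np.sqrt(prior_prec) * np.eye(d_out)

# Laplace posterior: vec(W) ~ N(vec(W_map), A^-1 ⊗ G^-1).
# Kronecker identity: if E ~ N(0, I), then W_map + Lg E Laᵀ has that
# covariance, where Lg Lgᵀ = G^-1 and La Laᵀ = A^-1.
def sample_weights(k):
    La = np.linalg.cholesky(np.linalg.inv(A))
    Lg = np.linalg.cholesky(np.linalg.inv(G))
    E = rng.normal(size=(k, d_out, d_in))
    return W_map + Lg @ E @ La.T

# Monte-Carlo predictive mean and variance for a new activation vector.
a_new = rng.normal(size=d_in)
preds = sample_weights(200) @ a_new  # (200, d_out)
mean, var = preds.mean(axis=0), preds.var(axis=0)
```

The per-element predictive variance `var` is the kind of layer-level uncertainty that the paper then propagates through the routed expert paths; the block-wise structure means each expert's layer gets its own small `(A, G)` pair rather than one intractable full-network covariance.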
Problem

Research questions and friction points this paper is trying to address.

Estimating uncertainty in fine-tuned large language models
Applying Bayesian inference to existing MoE expert pathways
Improving calibration and likelihood for reliable decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian Mixture of Experts for LLM uncertainty estimation
Structured Laplace approximation to expert linear layers
Kronecker-factored low-rank approximations for scalable uncertainty
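The third contribution above, combining per-expert uncertainties through the router, amounts to moment-matching a mixture of per-expert predictive distributions. A minimal sketch, assuming (hypothetically) that each selected expert yields a Gaussian predictive mean and variance and the router supplies softmax gating weights:

```python
import numpy as np

# Hypothetical quantities for one token: router gating probabilities over
# the selected experts, and each expert's predictive mean and variance.
gate_p = np.array([0.6, 0.3, 0.1])   # softmax gating weights (sum to 1)
mu_e   = np.array([1.0, 0.5, -0.2])  # per-expert predictive means
var_e  = np.array([0.04, 0.10, 0.25])  # per-expert predictive variances

# Moment-matched mixture via the laws of total expectation and variance:
#   mu  = Σ_e p_e mu_e
#   var = Σ_e p_e (var_e + mu_e^2) - mu^2
mu = np.sum(gate_p * mu_e)
var = np.sum(gate_p * (var_e + mu_e**2)) - mu**2
```

The mixture variance exceeds the gated average of the per-expert variances whenever the experts disagree, so routing disagreement itself contributes to the reported uncertainty.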
Maryam Dialameh
University of Waterloo, Waterloo, Canada
Hossein Rajabzadeh
University of Waterloo, Waterloo, Canada
Weiwei Zhang
Ascend Team, Huawei Technologies, Toronto, Canada
Walid Ahmed
Huawei Technologies Canada
Deep Learning · Machine Learning · Soft Computing
Hyock Ju Kwon
University of Waterloo, Waterloo, Canada