🤖 AI Summary
Fine-tuned large language models (LLMs) often lack reliable posterior uncertainty estimates, which hinders trustworthy downstream decision-making. Method: the paper proposes a Bayesian Mixture-of-Experts (MoE) framework that requires no retraining and introduces no additional parameters. Its core contribution is applying a structured Laplace approximation, for the first time, to the second linear layer of each MoE expert, combined with a block-wise low-rank Kronecker decomposition for scalable, modular posterior inference. Expert-path selection and uncertainty propagation enable lightweight calibration of prediction confidence. Results: experiments on Qwen1.5-MoE and DeepSeek-MoE show significant reductions in expected calibration error (ECE) and negative log-likelihood (NLL), substantially improving downstream decision reliability. The method offers an efficient, plug-and-play paradigm for uncertainty quantification in MoE-based LLMs without architectural or training modifications.
📝 Abstract
We present Bayesian Mixture of Experts (Bayesian-MoE), a post-hoc uncertainty estimation framework for fine-tuned large language models (LLMs) based on Mixture-of-Experts architectures. Our method applies a structured Laplace approximation to the second linear layer of each expert, enabling calibrated uncertainty estimation without modifying the original training procedure or introducing new parameters. Unlike prior approaches, which apply Bayesian inference to added adapter modules, Bayesian-MoE directly targets the expert pathways already present in MoE models, leveraging their modular design for tractable block-wise posterior estimation. We use Kronecker-factored low-rank approximations to model curvature and derive scalable estimates of predictive uncertainty and marginal likelihood. Experiments on common-sense reasoning benchmarks with Qwen1.5-MoE and DeepSeek-MoE demonstrate that Bayesian-MoE improves both expected calibration error (ECE) and negative log-likelihood (NLL) over baselines, confirming its effectiveness for reliable downstream decision-making.
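To make the mechanism concrete, here is a minimal sketch (not the paper's implementation) of the kind of post-hoc machinery described above: a Kronecker-factored Laplace approximation over a single expert's second linear layer, with predictive uncertainty obtained by Monte Carlo sampling of the weight posterior. All dimensions and data are toy values, and the identity output-gradient factor `G` is a stand-in assumption for the backprop statistics a real K-FAC computation would collect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "expert" second linear layer y = W x, with MAP weights after fine-tuning.
d_in, d_out, n = 8, 4, 256
X = rng.normal(size=(n, d_in))          # activations entering the layer
W_map = rng.normal(size=(d_out, d_in))  # MAP estimate (frozen, not retrained)

# Kronecker-factored curvature: Fisher ≈ G ⊗ A, where A is the input-activation
# second moment (plus prior precision) and G a placeholder output-gradient
# second moment (identity here; a real run would estimate it from gradients).
prior_prec = 1.0
A = X.T @ X / n + prior_prec * np.eye(d_in)   # (d_in, d_in)
G = np.eye(d_out)                              # (d_out, d_out), assumed

# Matrix-normal posterior over W: row covariance G^{-1}, column covariance
# A^{-1}. Sampling uses W_s = W_map + L_G Z L_A^T with Cholesky factors.
L_A = np.linalg.cholesky(np.linalg.inv(A))
L_G = np.linalg.cholesky(np.linalg.inv(G))

def sample_weights(n_samples):
    """Draw weight matrices from the Laplace posterior around W_map."""
    Z = rng.normal(size=(n_samples, d_out, d_in))
    return W_map + L_G @ Z @ L_A.T

# Predictive mean and variance for a test input via Monte Carlo over weights.
x_test = rng.normal(size=d_in)
samples = sample_weights(512) @ x_test   # (512, d_out)
pred_mean, pred_var = samples.mean(0), samples.var(0)
```

In the full method this block-wise structure is what keeps inference tractable: each expert's layer gets its own small Kronecker-factored posterior, so the cost scales with the layer dimensions rather than with the total parameter count.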