LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks

📅 2024-05-23
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Reliable decision-making demands well-calibrated uncertainty estimates, yet modern Transformer models often exhibit overconfidence, and explicit ensemble methods incur prohibitive computational and memory overhead. To address this, the paper proposes a parameter-efficient implicit ensemble framework: all members share a single pre-trained Transformer backbone, while each member is assigned its own low-rank self-attention projection matrices implemented via LoRA, marking the first extension of LoRA to implicit ensembling. Crucially, the method adds only negligible memory overhead, and its computational cost at inference is nearly identical to that of a single model. On multiple benchmarks it achieves prediction accuracy competitive with explicit ensembles while substantially improving calibration, reducing Expected Calibration Error (ECE) by up to 37% and outperforming state-of-the-art implicit approaches such as BatchEnsemble.
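
To make the calibration claim concrete, the sketch below computes Expected Calibration Error (ECE), the metric cited above. It is a generic reference implementation, not code from the paper: the bin count and equal-width confidence binning are standard choices and are assumptions here.

```python
import torch

def expected_calibration_error(probs: torch.Tensor, labels: torch.Tensor, n_bins: int = 15) -> float:
    """ECE: confidence-weighted gap between accuracy and confidence over bins.

    probs: (N, num_classes) softmax outputs; labels: (N,) integer class labels.
    """
    confidences, predictions = probs.max(dim=-1)
    accuracies = predictions.eq(labels).float()
    ece = torch.zeros(1)
    bin_edges = torch.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # Gap between mean accuracy and mean confidence in this bin, weighted by bin size.
            ece += in_bin.float().mean() * (accuracies[in_bin].mean() - confidences[in_bin].mean()).abs()
    return ece.item()
```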

📝 Abstract
Numerous real-world decisions rely on machine learning algorithms and require calibrated uncertainty estimates. However, modern methods often yield overconfident, uncalibrated predictions. The dominant approach to quantifying the uncertainty inherent in the model is to train an ensemble of separate predictors and measure their empirical variance. In an explicit implementation, the ensemble has high computational cost and memory footprint, especially if the base model itself is already large, like modern transformers. This motivates efforts to develop implicit ensemble methods that emulate the ensemble without explicitly instantiating all its members. We introduce LoRA-Ensemble, a parameter-efficient ensembling method for self-attention networks. It is based on Low-Rank Adaptation (LoRA), originally developed for efficient LLM fine-tuning, and extends it into an implicit ensembling scheme, where all ensemble members share the same, pre-trained self-attention network, but have individual low-rank matrices for the attention projections. The resulting method not only outperforms state-of-the-art implicit techniques like BatchEnsemble, but even matches or exceeds the accuracy of an Explicit Ensemble, while at the same time achieving superior calibration.
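
The abstract's core mechanism, a shared pre-trained attention projection plus individual low-rank matrices per ensemble member, can be sketched roughly as below. This is an illustrative PyTorch module under assumed names, shapes, and hyperparameters (`LoRAEnsembleProjection`, `rank`, `n_members`, the initialization scheme), not the authors' implementation.

```python
import torch
import torch.nn as nn

class LoRAEnsembleProjection(nn.Module):
    """Frozen shared projection W plus a per-member low-rank update B_i @ A_i."""

    def __init__(self, d_model: int, n_members: int, rank: int = 8):
        super().__init__()
        # Pre-trained attention projection, shared by all members and kept frozen.
        self.shared = nn.Linear(d_model, d_model, bias=False)
        self.shared.weight.requires_grad_(False)
        # Independent low-rank factors for each ensemble member (A random, B zero,
        # so every member starts from the shared pre-trained weights).
        self.A = nn.Parameter(torch.randn(n_members, rank, d_model) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_members, d_model, rank))

    def forward(self, x: torch.Tensor, member: int) -> torch.Tensor:
        # x: (batch, seq_len, d_model); returns the projection for one ensemble member.
        delta = self.B[member] @ self.A[member]  # (d_model, d_model), rank-limited update
        return self.shared(x) + x @ delta.T
```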
Problem

Research questions and friction points this paper is trying to address.

Real-world decisions based on machine learning require calibrated uncertainty estimates
Explicit ensembles are computationally and memory-expensive for large models such as transformers
Modern methods often produce overconfident, uncalibrated predictions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends Low-Rank Adaptation (LoRA) into an implicit ensembling scheme
All ensemble members share one pre-trained self-attention backbone
Each member has its own low-rank matrices for the attention projections (see the sketch below)
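
As referenced in the last item above, the following is a minimal sketch of how such an implicit ensemble might be queried at inference: each member reuses the same frozen backbone, the members' softmax outputs are averaged for the prediction, and their spread serves as a simple uncertainty signal. The `model(x, member=i)` interface mirrors the hypothetical projection module sketched earlier and is an assumption, not the paper's API.

```python
import torch

@torch.no_grad()
def ensemble_predict(model, x: torch.Tensor, n_members: int):
    # `model(x, member=i)` is assumed to return logits of shape (batch, num_classes)
    # for ensemble member i, all members sharing the same frozen backbone.
    probs = torch.stack([model(x, member=i).softmax(dim=-1) for i in range(n_members)])
    mean_probs = probs.mean(dim=0)           # ensemble prediction
    uncertainty = probs.var(dim=0).sum(-1)   # disagreement across members per sample
    return mean_probs, uncertainty
```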