🤖 AI Summary
Existing hybrid parameter-efficient fine-tuning (PEFT) methods face two key bottlenecks in domain adaptation: (1) the absence of uncertainty quantification, which undermines decision reliability; and (2) the inability to adapt dynamically to newly arriving data. To address these, we propose the first Bayesian hybrid PEFT framework, unifying Adapter, LoRA, and prefix tuning. Our approach introduces Bayesian parameter modeling and a dynamic posterior-to-prior transfer mechanism, enabling uncertainty-aware continual learning. Crucially, it is the first to embed Bayesian inference into a hybrid PEFT architecture, preserving parameter efficiency while supporting evolutionary model updates and calibrated confidence estimation. Experiments across sentiment analysis, news classification, and commonsense reasoning demonstrate statistically significant improvements over state-of-the-art PEFT baselines, with superior uncertainty modeling fidelity and online adaptability.
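The "Bayesian parameter modeling" idea can be illustrated with a minimal sketch: instead of a point-estimate weight matrix, each adapter weight entry is treated as a Gaussian with learnable mean and variance, sampled via the reparameterization trick, and predictive uncertainty is estimated by Monte Carlo sampling. All names and shapes below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical variational parameters for one small adapter weight matrix.
# Mean-field assumption: each entry is an independent Gaussian N(mu, sigma^2).
mu = rng.normal(0.0, 0.02, size=(4, 4))   # posterior means
log_sigma = np.full((4, 4), -3.0)         # posterior log standard deviations

def sample_weight():
    """Reparameterization trick: W = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps

def predict(x, n_samples=50):
    """Monte Carlo predictive mean and standard deviation for input x."""
    outputs = np.stack([x @ sample_weight() for _ in range(n_samples)])
    return outputs.mean(axis=0), outputs.std(axis=0)

x = rng.standard_normal(4)
mean, std = predict(x)
# `std` quantifies the model's (epistemic) uncertainty for this input,
# which a point-estimate PEFT method cannot provide.
```

In a full framework, `mu` and `log_sigma` would be trained with a variational objective (e.g. an ELBO); this sketch only shows where the uncertainty estimate comes from.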
📝 Abstract
Large Language Models (LLMs) have demonstrated transformative potential in reshaping the world. As these models are pretrained on general corpora, they often require domain-specific fine-tuning to optimize performance in specialized business applications. Due to their massive scale, parameter-efficient fine-tuning (PEFT) methods are widely used to reduce training costs. Among them, hybrid PEFT methods that combine multiple PEFT techniques have achieved the best performance. However, existing hybrid PEFT methods face two main challenges when fine-tuning LLMs for specialized applications: (1) they rely on point estimates and thus cannot quantify uncertainty for reliable decision-making, and (2) they struggle to adapt dynamically to newly arriving data, limiting their suitability for real-world settings. We propose Bayesian Hybrid Parameter-Efficient Fine-Tuning (BH-PEFT), a novel method that integrates Bayesian learning into hybrid PEFT. BH-PEFT combines Adapter, LoRA, and prefix-tuning to fine-tune the feedforward and attention layers of the Transformer. By modeling learnable parameters as distributions, BH-PEFT enables uncertainty quantification. We further propose a Bayesian dynamic fine-tuning approach in which the last posterior serves as the prior for the next round, enabling effective adaptation to new data. We evaluated BH-PEFT on business tasks such as sentiment analysis, news categorization, and commonsense reasoning. Results show that our method outperforms existing PEFT baselines, enables uncertainty quantification for more reliable decisions, and improves adaptability in dynamic scenarios. This work contributes to business analytics and data science by proposing a novel BH-PEFT method and dynamic fine-tuning approach that support uncertainty-aware and adaptive decision-making in real-world situations.
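The posterior-to-prior dynamic fine-tuning idea ("the last posterior serves as the prior for the next round") can be sketched with a conjugate linear-Gaussian model, where the Bayesian update has a closed form. This is a toy stand-in for the variational updates a real BH-PEFT implementation would use; every name and constant here is our own illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy streaming setup: observe y = x @ w_true + noise in successive rounds.
d, noise_var = 3, 0.25
w_true = np.array([1.0, -2.0, 0.5])

# Round-0 prior over the weight vector: a broad Gaussian N(m, S).
m = np.zeros(d)
S = np.eye(d) * 10.0

def bayes_update(m, S, X, y, noise_var):
    """Closed-form Gaussian posterior for a linear model with Gaussian noise."""
    S_inv = np.linalg.inv(S)
    S_post = np.linalg.inv(S_inv + X.T @ X / noise_var)
    m_post = S_post @ (S_inv @ m + X.T @ y / noise_var)
    return m_post, S_post

for t in range(3):  # three rounds of newly arriving data
    X = rng.standard_normal((20, d))
    y = X @ w_true + rng.normal(0.0, np.sqrt(noise_var), 20)
    # Key step: this round's posterior becomes the next round's prior.
    m, S = bayes_update(m, S, X, y, noise_var)

# After a few rounds, the posterior mean tracks w_true and the
# posterior covariance shrinks, reflecting accumulated evidence.
```

The same recursion carries over to variational PEFT parameters: each round's fitted distribution over adapter weights seeds the next round's optimization, so the model adapts without discarding what earlier data established.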