AI Summary
This work addresses miscalibration and overconfidence in multimodal large language models (MLLMs) for audio question answering, which undermine their reliability in risk-sensitive applications. To mitigate this, the study introduces the Improved Variational Online Newton (IVON) optimizer into the fine-tuning process, an approach that explicitly models parameter uncertainty through variational inference. By incorporating uncertainty-aware optimization, the proposed method not only enhances predictive accuracy but also significantly improves confidence calibration, alleviating overconfidence. This yields more trustworthy outputs, making the model better suited for high-stakes scenarios such as selective prediction, where calibrated confidence estimates are critical for decision-making.
Abstract
Variational inference (VI) provides a principled framework for estimating posterior distributions over model parameters, enabling explicit modeling of weight uncertainty during optimization. By capturing this uncertainty, VI improves the reliability of predictions, yielding better-calibrated outputs. In this work, we investigate the benefits of VI for challenging multimodal understanding and reasoning by applying the Improved Variational Online Newton (IVON) optimizer, a recent VI method, to the fine-tuning of a multimodal large language model on audio question answering tasks. Our results show that VI not only enhances predictive accuracy but also significantly improves calibration, reducing the model's overconfidence. These advances further support risk-sensitive applications such as selective prediction, where reliable confidence estimates are crucial.
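To make the calibration and selective-prediction claims concrete, the sketch below shows two standard metrics one would use to evaluate them: expected calibration error (ECE), which measures the gap between a model's confidence and its accuracy, and selective accuracy under a confidence threshold. This is a minimal, illustrative implementation, not the paper's evaluation code; the function names `expected_calibration_error` and `selective_accuracy` are our own.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: the bin-weight-averaged absolute gap between mean
    confidence and accuracy over equal-width confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight bin by its share of samples
    return ece

def selective_accuracy(confidences, correct, threshold):
    """Accuracy on predictions whose confidence meets the threshold;
    the model abstains on the rest (selective prediction)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    kept = confidences >= threshold
    if not kept.any():
        return None  # abstained on every example
    return correct[kept].mean()
```

A well-calibrated model drives ECE toward zero, and thresholding on its confidence then reliably trades coverage for accuracy, which is exactly what selective prediction relies on.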