🤖 AI Summary
To address the challenge of rapid adaptation to novel tasks in few-shot multi-task learning, this paper proposes a Bayesian meta-learning framework based on neural variational dropout. The method introduces: (1) a task-conditioned low-rank product-of-Bernoulli-experts meta-model that predicts lightweight, memory-efficient dropout rates; (2) a task-dependent prior conditioned on the full task data, jointly modeling parameter uncertainty and functional ambiguity; and (3) amortized variational inference that jointly optimizes the shared backbone and the task-specific dropout configurations, enabling plug-and-play network reconfiguration for new tasks without fine-tuning. Evaluated on 1D stochastic regression, image inpainting, and few-shot classification, the approach improves generalization and inference robustness while reducing parameter redundancy.
📝 Abstract
Learning to infer the conditional posterior model is a key step for robust meta-learning. This paper presents a new Bayesian meta-learning approach called Neural Variational Dropout Processes (NVDPs). NVDPs model the conditional posterior distribution based on task-specific dropout; a low-rank product-of-Bernoulli-experts meta-model is utilized for a memory-efficient mapping from a few observed contexts to dropout rates. This allows a quick reconfiguration of a globally learned, shared neural network for new tasks in multi-task few-shot learning. In addition, NVDPs utilize a novel prior conditioned on the whole task data to optimize the conditional *dropout* posterior in amortized variational inference. Surprisingly, this enables robust approximation of task-specific dropout rates that can handle a wide range of functional ambiguities and uncertainties. We compared the proposed method with other meta-learning approaches on few-shot learning tasks such as 1D stochastic regression, image inpainting, and classification. The results show the excellent performance of NVDPs.
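To make the core mechanism concrete, the sketch below illustrates the idea of mapping a few observed context points to per-unit dropout rates through a low-rank parameterization, then sampling a Bernoulli mask to reconfigure a shared layer. All names, shapes, and the mean-pooling context encoder are assumptions for illustration, not the paper's actual architecture or training objective (which uses amortized variational inference with a task-conditioned prior).

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def context_embedding(context, dim):
    """Crude permutation-invariant task embedding: mean of the (x, y)
    context statistics, tiled up to `dim`. Stands in for a learned
    set encoder (hypothetical)."""
    if not context:
        return [0.0] * dim
    mx = sum(x for x, _ in context) / len(context)
    my = sum(y for _, y in context) / len(context)
    return [(mx if i % 2 == 0 else my) for i in range(dim)]

def dropout_rates(emb, U, V):
    """Low-rank map from the task embedding to per-unit dropout rates:
    logits = (emb @ U) @ V, rates = sigmoid(logits). U and V are the
    two rank-r factors that keep the mapping memory-efficient."""
    r = len(U[0])  # rank
    h = [sum(emb[i] * U[i][k] for i in range(len(emb))) for k in range(r)]
    n = len(V[0])  # number of units in the shared layer
    logits = [sum(h[k] * V[k][j] for k in range(r)) for j in range(n)]
    return [sigmoid(l) for l in logits]

def apply_task_dropout(activations, rates):
    """Sample a Bernoulli keep-mask per unit and rescale (inverted
    dropout), reconfiguring the shared layer for this task."""
    out = []
    for a, p_drop in zip(activations, rates):
        keep = 1.0 - p_drop
        mask = 1.0 if random.random() < keep else 0.0
        out.append(a * mask / max(keep, 1e-8))
    return out

# Toy usage: rank-2 factors for a 4-unit shared layer.
emb_dim, rank, units = 4, 2, 4
U = [[0.1 * (i + k) for k in range(rank)] for i in range(emb_dim)]
V = [[0.2 * (k - j) for j in range(units)] for k in range(rank)]

ctx = [(0.5, 1.2), (1.0, 0.8)]  # a few observed (x, y) context points
rates = dropout_rates(context_embedding(ctx, emb_dim), U, V)
acts = apply_task_dropout([1.0, 1.0, 1.0, 1.0], rates)
```

In the actual method these factors would be trained end-to-end with the shared backbone, and the dropout posterior would be regularized toward a prior conditioned on the whole task data rather than fixed as here.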