🤖 AI Summary
In low-data, high-noise regimes, automating the integration of prior knowledge into learning models remains challenging. Method: This paper proposes “Informed Meta-Learning,” a paradigm that enables the automatic, controllable construction and injection of inductive biases from interpretable representations, such as natural language, into meta-learners. The authors introduce the Informed Neural Process (INP), which couples knowledge representations with conditional meta-learning to explicitly encode human-specified knowledge into the meta-learner's conditioning. Contribution/Results: Evaluated on few-shot regression and classification across multiple tasks, INP achieves up to a 37% performance gain with ≤5 support samples per task, improving data efficiency and robustness of generalisation under noise. The framework provides an interpretable, scalable foundation for knowledge-guided, data-efficient learning.
📝 Abstract
A significant challenge in machine learning, particularly in noisy and low-data environments, lies in effectively incorporating inductive biases to enhance data efficiency and robustness. Despite the success of informed machine learning methods, designing algorithms with explicit inductive biases remains a largely manual process. In this work, we explore how prior knowledge represented in its native format, e.g., natural language, can be integrated into machine learning models in an automated manner. Inspired by the learning-to-learn principles of meta-learning, we consider the approach of learning to integrate knowledge via conditional meta-learning, a paradigm we refer to as informed meta-learning. We introduce and theoretically motivate the principles of informed meta-learning, enabling automated and controllable inductive bias selection. To illustrate our claims, we implement an instantiation of informed meta-learning, the Informed Neural Process (INP), and empirically demonstrate the potential benefits and limitations of informed meta-learning in improving data efficiency and generalisation.
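To make the conditioning mechanism concrete, the following is a minimal sketch of the core idea behind an INP-style model: a standard Neural Process summarises the context set into a permutation-invariant representation, and the decoder is additionally conditioned on an embedding of external knowledge (e.g., a sentence embedding). All dimensions, the `mlp`/`inp_predict` helpers, and the random-weight forward pass are illustrative assumptions, not the paper's actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(d_in, d_hidden, d_out):
    """Randomly initialised two-layer MLP parameters (illustrative only)."""
    return [(rng.normal(0, 0.1, (d_in, d_hidden)), np.zeros(d_hidden)),
            (rng.normal(0, 0.1, (d_hidden, d_out)), np.zeros(d_out))]

def mlp(params, x):
    """Forward pass of a two-layer MLP with tanh hidden activation."""
    (W1, b1), (W2, b2) = params
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Hypothetical dimensions: input, output, context summary, knowledge embedding, hidden.
D_X, D_Y, D_R, D_K, D_H = 1, 1, 8, 4, 32

enc = init_mlp(D_X + D_Y, D_H, D_R)        # encodes each (x, y) context pair
dec = init_mlp(D_X + D_R + D_K, D_H, D_Y)  # decoder also sees the knowledge vector

def inp_predict(x_ctx, y_ctx, x_tgt, k):
    """Predict target outputs from a context set and a knowledge embedding k.

    r: mean-aggregated (permutation-invariant) summary of the context set.
    k: stand-in for an embedding of natural-language prior knowledge.
    """
    r = mlp(enc, np.concatenate([x_ctx, y_ctx], axis=-1)).mean(axis=0)
    n_tgt = x_tgt.shape[0]
    z = np.concatenate([x_tgt,
                        np.tile(r, (n_tgt, 1)),
                        np.tile(k, (n_tgt, 1))], axis=-1)
    return mlp(dec, z)

# Toy few-shot regression task: 5 context points, 10 target locations.
x_ctx = rng.normal(size=(5, D_X))
y_ctx = np.sin(x_ctx)
x_tgt = np.linspace(-2.0, 2.0, 10).reshape(-1, 1)
k = rng.normal(size=(D_K,))  # placeholder for a text-derived knowledge embedding
y_hat = inp_predict(x_ctx, y_ctx, x_tgt, k)
print(y_hat.shape)  # one prediction per target point
```

The design choice illustrated here is that knowledge enters only through an extra conditioning vector, so setting `k` to a neutral value recovers an ordinary (uninformed) Neural Process; in the actual INP this embedding would be learned jointly with the meta-learner rather than drawn at random.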