Compositional meta-learning through probabilistic task inference

📅 2025-10-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses few-shot meta-learning by proposing a compositional meta-learning framework grounded in probabilistic task inference. The method explicitly models tasks as structured compositions of reusable computational modules (e.g., functions or subroutines) and employs a generative probabilistic model to learn statistical priors over modules and their compositional regularities across a task distribution. Solving a new task is formulated as Bayesian structural inference—requiring no gradient-based adaptation—and enables rapid module identification and recombination from minimal data (e.g., one shot) via hypothesis testing alone. By unifying the representational power of neural networks with the data efficiency of probabilistic inference, the approach accurately recovers ground-truth modular structure and statistical dependencies in rule learning and robotic motion control benchmarks. It achieves substantial improvements in cross-task generalization and adaptation speed compared to gradient-based meta-learners.

Technology Category

Application Category

📝 Abstract
To solve a new task from minimal experience, it is essential to effectively reuse knowledge from previous tasks, a problem known as meta-learning. Compositional solutions, where common elements of computation are flexibly recombined into new configurations, are particularly well-suited for meta-learning. Here, we propose a compositional meta-learning model that explicitly represents tasks as structured combinations of reusable computations. We achieve this by learning a generative model that captures the underlying components and their statistics shared across a family of tasks. This approach transforms learning a new task into a probabilistic inference problem, which allows for finding solutions without parameter updates through highly constrained hypothesis testing. Our model successfully recovers ground truth components and statistics in rule learning and motor learning tasks. We then demonstrate its ability to quickly infer new solutions from just single examples. Together, our framework joins the expressivity of neural networks with the data-efficiency of probabilistic inference to achieve rapid compositional meta-learning.
Problem

Research questions and friction points this paper is trying to address.

Reusing knowledge from previous tasks with minimal new experience
Representing tasks as structured combinations of reusable computations
Transforming new task learning into probabilistic inference without parameter updates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compositional meta-learning with probabilistic task inference
Generative model captures reusable computational components
Transforms learning into probabilistic inference without parameter updates
🔎 Similar Papers
No similar papers found.
Jacob J. W. Bakermans
Jacob J. W. Bakermans
University of Oxford
Pablo Tano
Pablo Tano
University of Geneva
R
Reidar Riveland
Department of Basic Neurosciences, University of Geneva, Switzerland
C
Charles Findling
Department of Basic Neurosciences, University of Geneva, Switzerland
Alexandre Pouget
Alexandre Pouget
Department of Basic Neurosciences, University of Geneva, Switzerland