🤖 AI Summary
Existing meta-learning and hypernetwork approaches for few-shot tabular classification suffer from slow inference, reliance on fine-tuning or task-specific hyperparameters, and limited generalization—particularly failing to support zero-shot, multi-class classification on arbitrary datasets. This paper introduces MotherNet: a Transformer-based hypernetwork trained via synthetic-task meta-learning to generate context-aware, task-specific subnetwork weights in a single forward pass—without gradient updates, dataset-specific fine-tuning, or hyperparameter tuning. The distilled lightweight subnetwork achieves competitive accuracy on small-data tabular benchmarks relative to TabPFN and gradient-boosted trees, while offering significantly faster inference than TabPFN. Crucially, MotherNet is the first method to enable zero-shot, tuning-free, hyperparameter-free neural weight generation for arbitrary numeric tabular data—overcoming the conventional hypernetwork limitation of being confined to fixed, multi-task, closed-domain settings.
📝 Abstract
Foundation models are transforming machine learning across many modalities, with in-context learning replacing classical model training. Recent work on tabular data hints at a similar opportunity to build foundation models for classification on numerical data. However, existing meta-learning approaches cannot compete with tree-based methods in terms of inference time. In this paper, we propose MotherNet, a hypernetwork architecture trained on synthetic classification tasks that, once prompted with a never-before-seen training set, generates the weights of a trained "child" neural network by in-context learning in a single forward pass. In contrast to most existing hypernetworks, which are usually trained for relatively constrained multi-task settings, MotherNet can create models for multi-class classification on arbitrary tabular datasets without any dataset-specific gradient descent. The child network generated by MotherNet outperforms neural networks trained with gradient descent on small datasets, and its predictions are comparable to those of TabPFN and standard ML methods such as gradient boosting. Unlike a direct application of TabPFN, MotherNet-generated networks are highly efficient at inference time. We also demonstrate that HyperFast is unable to perform effective in-context learning on small datasets and relies heavily on dataset-specific fine-tuning and hyperparameter tuning, whereas MotherNet requires no fine-tuning or per-dataset hyperparameters.
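To make the mechanism concrete, the following is a minimal, hypothetical sketch of the hypernetwork pattern the abstract describes: a "mother" network takes a whole labeled training set as input and, in one forward pass, emits a flat weight vector that is unpacked into a small "child" MLP, which then classifies test points with no gradient descent. All names (`mother_forward`, `child_predict`), sizes, and the mean-pooled encoder are illustrative stand-ins; the actual MotherNet uses a trained Transformer, not the random untrained weights used here.

```python
import numpy as np

rng = np.random.default_rng(0)

def mother_forward(X_train, y_train, hidden=8, n_classes=2):
    """Sketch of a hypernetwork forward pass (random, untrained weights):
    encode the training set into a context vector, then linearly map that
    vector to the flattened weights of a small 'child' MLP."""
    n_features = X_train.shape[1]
    # Encode each (x, one-hot y) pair and mean-pool into a context vector;
    # this stands in for MotherNet's Transformer encoder.
    pairs = np.concatenate([X_train, np.eye(n_classes)[y_train]], axis=1)
    W_enc = rng.standard_normal((pairs.shape[1], 16))
    context = np.tanh(pairs @ W_enc).mean(axis=0)            # shape (16,)
    # Decoder head: one linear map from context to all child parameters.
    n_child = n_features * hidden + hidden + hidden * n_classes + n_classes
    W_dec = rng.standard_normal((16, n_child)) / np.sqrt(16)
    theta = context @ W_dec
    # Unpack the flat vector into the child MLP's weights and biases.
    i = 0
    W1 = theta[i:i + n_features * hidden].reshape(n_features, hidden)
    i += n_features * hidden
    b1 = theta[i:i + hidden]; i += hidden
    W2 = theta[i:i + hidden * n_classes].reshape(hidden, n_classes)
    i += hidden * n_classes
    b2 = theta[i:]
    return W1, b1, W2, b2

def child_predict(params, X):
    """Inference uses only the tiny child MLP: one hidden ReLU layer."""
    W1, b1, W2, b2 = params
    return np.argmax(np.maximum(X @ W1 + b1, 0.0) @ W2 + b2, axis=1)

X_tr = rng.standard_normal((20, 4))
y_tr = rng.integers(0, 2, size=20)
params = mother_forward(X_tr, y_tr)   # single forward pass, no gradient steps
preds = child_predict(params, rng.standard_normal((5, 4)))
print(preds.shape)
```

The point of the structure is the one the abstract makes about inference cost: after the single `mother_forward` call, prediction touches only the small child network, whereas TabPFN must re-attend over the entire training set for every batch of test points.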