LLMs as In-Context Meta-Learners for Model and Hyperparameter Selection

📅 2025-10-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
In machine learning, model and hyperparameter selection traditionally rely on expert knowledge or computationally expensive search procedures, which scale poorly and transfer little knowledge between tasks. This paper proposes leveraging large language models (LLMs) as in-context meta-learners: given only dataset meta-features, such as dimensionality, sample size, and class distribution, it constructs interpretable zero-shot prompts that let the LLM recommend both models and hyperparameters directly, with no training, fine-tuning, or search. The core contribution lies in explicitly harnessing the implicit meta-learning capabilities of LLMs to achieve cross-task knowledge transfer. Evaluated on synthetic benchmarks and real-world datasets (e.g., from OpenML), the approach outperforms conventional heuristic strategies, achieving competitive accuracy and strong generalization across diverse tasks. This work establishes a novel paradigm for training-free, automated model selection.

📝 Abstract
Model and hyperparameter selection are critical but challenging in machine learning, typically requiring expert intuition or expensive automated search. We investigate whether large language models (LLMs) can act as in-context meta-learners for this task. By converting each dataset into interpretable metadata, we prompt an LLM to recommend both model families and hyperparameters. We study two prompting strategies: (1) a zero-shot mode relying solely on pretrained knowledge, and (2) a meta-informed mode augmented with examples of models and their performance on past tasks. Across synthetic and real-world benchmarks, we show that LLMs can exploit dataset metadata to recommend competitive models and hyperparameters without search, and that improvements from meta-informed prompting demonstrate their capacity for in-context meta-learning. These results highlight a promising new role for LLMs as lightweight, general-purpose assistants for model selection and hyperparameter optimization.
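The abstract's zero-shot mode amounts to summarizing a dataset as interpretable meta-features and rendering them into a prompt. A minimal sketch of that pipeline follows; the specific meta-features, function names, and prompt wording are illustrative assumptions, not the paper's actual implementation.

```python
import json
from collections import Counter

def dataset_meta_features(X, y):
    """Summarize a classification dataset with simple, interpretable
    meta-features (a hypothetical subset of what the paper might use)."""
    counts = Counter(y)
    total = len(y)
    return {
        "n_samples": total,
        "n_features": len(X[0]) if X else 0,
        "n_classes": len(counts),
        # class distribution as fractions, rounded for prompt readability
        "class_distribution": {str(c): round(n / total, 3) for c, n in counts.items()},
    }

def zero_shot_prompt(meta):
    """Render meta-features into a zero-shot prompt asking an LLM to
    recommend a model family and hyperparameters (wording is illustrative)."""
    return (
        "You are a model-selection assistant. Given these dataset meta-features:\n"
        + json.dumps(meta, indent=2)
        + "\nRecommend one model family and its hyperparameters as JSON."
    )

# Toy dataset: four samples, two features, two classes.
X = [[0.1, 2.0], [0.3, 1.5], [0.9, 0.2], [0.4, 1.1]]
y = ["a", "a", "b", "b"]
print(zero_shot_prompt(dataset_meta_features(X, y)))
```

The prompt stays interpretable by construction: a reader can see exactly which dataset properties the recommendation was conditioned on.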
Problem

Research questions and friction points this paper is trying to address.

Automating model and hyperparameter selection without expert intervention
Using LLMs as meta-learners for recommending competitive configurations
Evaluating prompting strategies for in-context meta-learning capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs recommend models and hyperparameters using metadata
Zero-shot and meta-informed prompting strategies for selection
LLMs act as lightweight meta-learners without search
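The meta-informed strategy augments the prompt with examples of models and their observed performance on past tasks. A sketch of how such a prompt could be assembled, assuming a hypothetical record format for past tasks (the field names and layout are illustrative, not from the paper):

```python
def meta_informed_prompt(meta, past_tasks):
    """Build a prompt that prepends past-task (meta-features -> model,
    accuracy) examples before the new task's meta-features."""
    lines = ["Past tasks (meta-features -> best model, accuracy):"]
    for task in past_tasks:
        lines.append(
            f"- {task['meta']} -> {task['model']} {task['hyperparameters']}, "
            f"accuracy {task['accuracy']:.2f}"
        )
    lines.append(f"New task meta-features: {meta}")
    lines.append("Recommend a model family and hyperparameters as JSON.")
    return "\n".join(lines)

# One illustrative past task; real usage would draw on a meta-dataset
# of previously evaluated configurations.
past = [
    {"meta": {"n_samples": 150, "n_features": 4, "n_classes": 3},
     "model": "RandomForest",
     "hyperparameters": {"n_estimators": 200},
     "accuracy": 0.95},
]
prompt = meta_informed_prompt(
    {"n_samples": 400, "n_features": 12, "n_classes": 2}, past
)
print(prompt)
```

The in-context examples play the role of a meta-training set: the LLM is asked to generalize from (meta-features, configuration, performance) triples to a new task without any parameter updates.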