Prompt-MII: Meta-Learning Instruction Induction for LLMs

📅 2025-10-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high inference overhead of large language models (LLMs) in long-context tasks, this paper proposes PROMPT-MII, an instruction induction framework based on meta-learning and reinforcement learning that automatically compresses in-context learning (ICL) examples into compact, generalizable instructions, replacing redundant demonstrations. It introduces the first meta-training protocol spanning thousands of HuggingFace classification datasets, enabling dynamic, task-adaptive instruction generation for unseen tasks on the fly. Evaluated on 90 unseen tasks, PROMPT-MII improves downstream F1 by 4–9 percentage points (10%–20% relative), matching ICL accuracy while reducing token consumption by 3–13×. Key contributions include: (1) the first instruction compression framework supporting large-scale, cross-task meta-learning; (2) efficient instruction generation for new datasets without fine-tuning the downstream model; and (3) substantial inference cost reduction without performance degradation.

📝 Abstract
A popular method to adapt large language models (LLMs) to new tasks is in-context learning (ICL), which is effective but incurs high inference costs as context length grows. In this paper we propose a method to perform instruction induction, where we take training examples and reduce them to a compact but descriptive prompt that can achieve performance comparable to ICL over the full training set. Specifically, we propose PROMPT-MII, a reinforcement learning (RL) based framework to meta-learn an instruction induction model that can generate compact instructions on the fly for an arbitrary new dataset. We train on over 3,000 diverse classification datasets from the HuggingFace hub, and evaluate on 90 unseen tasks. PROMPT-MII improves downstream model quality by 4-9 F1 points (10-20% relative), matching ICL performance while requiring 3-13x fewer tokens.
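The core tradeoff in the abstract — resending every demonstration per query versus inducing one compact instruction up front — can be sketched as follows. This is an illustrative sketch only: `call_llm`, the prompt templates, and the helper names are assumptions, not the paper's implementation.

```python
# Hypothetical sketch: in-context learning (ICL) vs. instruction induction.
# `call_llm` is a placeholder for any chat-completion API, not a real library call.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for an actual LLM API call

def icl_prompt(examples, query):
    """Standard ICL: every demonstration is resent with every query,
    so prompt length grows linearly with the number of examples."""
    demos = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in examples)
    return f"{demos}\nInput: {query}\nLabel:"

def induce_instruction(examples):
    """One-time induction: compress the training examples into a short,
    descriptive task instruction (paid once, reused for all queries)."""
    demos = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in examples)
    return call_llm(
        "Write a concise instruction that lets a model label new inputs, "
        f"based on these examples:\n{demos}"
    )

def induced_prompt(instruction, query):
    """At inference time only the compact instruction accompanies each query."""
    return f"{instruction}\nInput: {query}\nLabel:"
```

Because the induced instruction is typically far shorter than the full demonstration set, per-query token cost drops while task information is preserved — the source of the 3–13× savings reported above.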
Problem

Research questions and friction points this paper is trying to address.

Reducing high inference costs of in-context learning
Generating compact prompts matching full training performance
Meta-learning instruction induction for diverse classification tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-learns instruction induction via reinforcement learning
Generates compact prompts from training examples
Reduces token usage while matching ICL performance
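The RL meta-learning signal named in the bullets above can be sketched as a reward: the policy maps training examples to an instruction, and the reward is the downstream model's F1 when using that instruction on held-out data. All names here (`induce`, `evaluate_f1`, `reward`) are illustrative placeholders, not the paper's code; the actual policy-gradient update is omitted.

```python
# Hypothetical sketch of the RL objective for meta-learned instruction induction.
from typing import Callable, List, Tuple

Example = Tuple[str, str]  # (input text, label)

def reward(
    induce: Callable[[List[Example]], str],              # policy: examples -> instruction
    evaluate_f1: Callable[[str, List[Example]], float],  # downstream F1 given an instruction
    train_examples: List[Example],
    held_out: List[Example],
) -> float:
    """Reward for one episode: induce an instruction from the training
    examples, then score it by downstream F1 on held-out examples."""
    instruction = induce(train_examples)
    return evaluate_f1(instruction, held_out)
```

Training this across thousands of datasets is what lets the induction model generalize: at meta-test time it generates an instruction for an arbitrary new dataset in a single forward pass, with no fine-tuning.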