Learning to Order: Task Sequencing as In-Context Optimization

📅 2026-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited few-shot generalization of existing task-sequencing methods in novel scenarios, such as robotic assembly and autonomous driving, by proposing a meta-learning-based solution. The approach leverages large-scale, synthetically generated task sequences modeled as paths in directed graphs, enabling effective meta-learning under an unbounded task prior for the first time. Using a Transformer-based architecture, the model learns to rapidly infer optimal execution orders from only a few demonstrations by treating task sequencing as a graph-traversal problem. Experiments show that the proposed method significantly outperforms non-meta-learning baselines, discovering optimal task sequences more efficiently under few-shot conditions and thereby demonstrating strong generalization.

📝 Abstract
Task sequencing (TS) is one of the core open problems in Deep Learning, arising in a plethora of real-world domains, from robotic assembly lines to autonomous driving. Unfortunately, prior work has not convincingly demonstrated the generalization ability of meta-learned TS methods to solve new TS problems, given few initial demonstrations. In this paper, we demonstrate that deep neural networks can meta-learn over an infinite prior of synthetically generated TS problems and achieve few-shot generalization. We meta-learn a transformer-based architecture over datasets of sequencing trajectories generated from a prior distribution that samples sequencing problems as paths in directed graphs. In a large-scale experiment, we provide ample empirical evidence that our meta-learned models discover optimal task sequences significantly faster than non-meta-learned baselines.
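The prior described in the abstract, sampling sequencing problems as paths in directed graphs, can be illustrated with a minimal sketch. All names and details below are illustrative assumptions, not the authors' implementation: a hidden task order induces a random DAG of precedence constraints, candidate sequences are scored by how many constraints they violate, and a meta-learner would condition in-context on a few (sequence, cost) demonstrations.

```python
import random
from itertools import combinations

def sample_ts_problem(n_tasks, edge_prob=0.3, seed=None):
    """Sample a synthetic task-sequencing problem (assumed setup).

    A hidden random permutation of tasks plays the role of the optimal
    execution order; precedence edges are oriented consistently with it,
    so the resulting directed graph is acyclic and the hidden order is
    one of its topological sorts.
    """
    rng = random.Random(seed)
    order = list(range(n_tasks))
    rng.shuffle(order)                      # hidden optimal execution order
    pos = {t: i for i, t in enumerate(order)}
    edges = set()
    for a, b in combinations(range(n_tasks), 2):
        if rng.random() < edge_prob:        # keep this precedence pair
            u, v = (a, b) if pos[a] < pos[b] else (b, a)
            edges.add((u, v))               # u must run before v
    return edges, order

def violations(sequence, edges):
    """Cost of a candidate sequence: number of violated precedence edges."""
    idx = {t: i for i, t in enumerate(sequence)}
    return sum(1 for u, v in edges if idx[u] > idx[v])

if __name__ == "__main__":
    edges, hidden = sample_ts_problem(6, seed=0)
    # A few (sequence, cost) demonstrations a meta-learner could condition on:
    rng = random.Random(1)
    demos = [(seq, violations(seq, edges))
             for seq in (rng.sample(range(6), 6) for _ in range(3))]
    print(demos)
    print(violations(hidden, edges))  # the hidden order violates no constraint
```

In this toy formulation the few-shot task is to infer a zero-cost (topological) order from the demonstrations alone, which mirrors the paper's framing of sequencing as graph traversal.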
Problem

Research questions and friction points this paper is trying to address.

Task Sequencing
Few-shot Generalization
Meta-learning
Deep Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Task Sequencing
Meta-learning
In-context Optimization
Transformer Architecture
Few-shot Generalization
Jan Kobiolka
Department of Computer Science and Artificial Intelligence, University of Technology Nuremberg, Germany
Christian Frey
Department of Computer Science and Artificial Intelligence, University of Technology Nuremberg, Germany
Arlind Kadra
PhD, University of Freiburg
Deep Learning, Meta-Learning, AutoML
Gresa Shala
PhD candidate, University of Freiburg
Meta-learning, Dynamic Algorithm Configuration, Reinforcement Learning
Josif Grabocka
Professor of Machine Learning
Machine Learning