AdaptFlow: Adaptive Workflow Optimization via Meta-Learning

📅 2025-08-11
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing LLM-based agent workflows rely on static templates or manual design, which limits their generalization and scalability. Method: We propose a natural-language-driven meta-learning framework, the first to integrate Model-Agnostic Meta-Learning (MAML) into language agent workflow optimization. Our approach employs subtask-level adaptive initialization and bi-level optimization: inner-loop refinement for task-specific adaptation and outer-loop updates of the shared initialization to enable dynamic workflow evolution. Crucially, workflow modifications are guided entirely by LLM-generated feedback and natural-language instructions, eliminating manual intervention. Contribution/Results: Evaluated on question answering, code generation, and mathematical reasoning, our method consistently outperforms both handcrafted and automated search baselines, achieving multiple state-of-the-art results. It significantly enhances cross-task and cross-model generalization, demonstrating robust adaptability without human-designed structures.

๐Ÿ“ Abstract
Recent advances in large language models (LLMs) have sparked growing interest in agentic workflows, which are structured sequences of LLM invocations intended to solve complex tasks. However, existing approaches often rely on static templates or manually designed workflows, which limit adaptability to diverse tasks and hinder scalability. We propose AdaptFlow, a natural language-based meta-learning framework inspired by model-agnostic meta-learning (MAML). AdaptFlow learns a generalizable workflow initialization that enables rapid subtask-level adaptation. It employs a bi-level optimization scheme: the inner loop refines the workflow for a specific subtask using LLM-generated feedback, while the outer loop updates the shared initialization to perform well across tasks. This setup allows AdaptFlow to generalize effectively to unseen tasks by adapting the initialized workflow through language-guided modifications. Evaluated across question answering, code generation, and mathematical reasoning benchmarks, AdaptFlow consistently outperforms both manually crafted and automatically searched baselines, achieving state-of-the-art results with strong generalization across tasks and models. The source code and data are available at https://github.com/microsoft/DKI_LLM/tree/AdaptFlow/AdaptFlow.
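The bi-level scheme described in the abstract can be sketched in code. This is a minimal illustration, not the paper's implementation: `llm_revise` and `collect_feedback` are hypothetical stubs standing in for real LLM calls, and the "meta-update" here is a textual analogue of MAML's gradient step, since the workflow is a natural-language artifact rather than a parameter vector.

```python
def llm_revise(workflow: str, feedback: str) -> str:
    """Hypothetical stub for an LLM call that rewrites a
    natural-language workflow given textual feedback."""
    return f"{workflow} [revised per: {feedback}]"

def collect_feedback(workflow: str, subtask: str) -> str:
    """Hypothetical stub: execute the workflow on a subtask and
    return an LLM-generated critique of its performance."""
    return f"adjust steps for subtask '{subtask}'"

def inner_loop(init_workflow: str, subtask: str, steps: int = 2) -> str:
    """Inner loop: adapt the shared initialization to one subtask
    through a few rounds of feedback-driven revision."""
    wf = init_workflow
    for _ in range(steps):
        wf = llm_revise(wf, collect_feedback(wf, subtask))
    return wf

def outer_loop(init_workflow: str, subtasks: list[str], rounds: int = 1) -> str:
    """Outer loop: consolidate the subtask-adapted variants back
    into the shared initialization (textual analogue of the
    MAML meta-update)."""
    wf = init_workflow
    for _ in range(rounds):
        adapted = [inner_loop(wf, t) for t in subtasks]
        summary = "; ".join(a[-40:] for a in adapted)
        wf = llm_revise(wf, f"consolidate edits: {summary}")
    return wf
```

At test time, an unseen task would be handled by running only the inner loop from the learned initialization, i.e. adapting via language-guided modifications without touching the shared starting point.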
Problem

Research questions and friction points this paper is trying to address.

Optimizing agentic workflows for diverse complex tasks
Overcoming limitations of static templates in LLM workflows
Enhancing adaptability and scalability via meta-learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-learning framework for workflow optimization
Bi-level optimization with LLM feedback
Language-guided adaptation for unseen tasks