AI Summary
Existing LLM-based agent workflows rely on static templates or manual design, suffering from limited generalization and scalability. Method: We propose a natural-language-driven meta-learning framework, the first to integrate Model-Agnostic Meta-Learning (MAML) into language agent workflow optimization. Our approach employs subtask-level adaptive initialization and bi-level optimization: inner-loop fine-tuning for task-specific adaptation and outer-loop updates of the shared initialization to enable dynamic workflow evolution. Crucially, workflow modifications are guided entirely by LLM-generated feedback and natural-language instructions, eliminating manual intervention. Contribution/Results: Evaluated on question answering, code generation, and mathematical reasoning, our method consistently outperforms both handcrafted and automated search baselines, achieving multiple state-of-the-art results. It significantly enhances cross-task and cross-model generalization, demonstrating robust adaptability without human-designed structures.
Abstract
Recent advances in large language models (LLMs) have sparked growing interest in agentic workflows, which are structured sequences of LLM invocations intended to solve complex tasks. However, existing approaches often rely on static templates or manually designed workflows, which limit adaptability to diverse tasks and hinder scalability. We propose AdaptFlow, a natural language-based meta-learning framework inspired by model-agnostic meta-learning (MAML). AdaptFlow learns a generalizable workflow initialization that enables rapid subtask-level adaptation. It employs a bi-level optimization scheme: the inner loop refines the workflow for a specific subtask using LLM-generated feedback, while the outer loop updates the shared initialization to perform well across tasks. This setup allows AdaptFlow to generalize effectively to unseen tasks by adapting the initialized workflow through language-guided modifications. Evaluated across question answering, code generation, and mathematical reasoning benchmarks, AdaptFlow consistently outperforms both manually crafted and automatically searched baselines, achieving state-of-the-art results with strong generalization across tasks and models. The source code and data are available at https://github.com/microsoft/DKI_LLM/tree/AdaptFlow/AdaptFlow.
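The bi-level scheme described above follows the classic MAML structure: an inner loop adapts a shared initialization to each subtask, and an outer loop updates that initialization so post-adaptation performance is high across tasks. As a minimal illustration of this structure (a gradient-based toy analogue, not the paper's natural-language version; the quadratic task losses and step sizes here are invented for illustration):

```python
# Toy MAML-style bi-level optimization on scalar tasks.
# Each task i has loss L_i(theta) = (theta - t_i)^2; the shared
# initialization theta plays the role of AdaptFlow's shared workflow.

def inner_adapt(theta, target, alpha=0.1):
    # Inner loop: one task-specific gradient step from the shared init
    # (in AdaptFlow, this step is driven by LLM feedback instead).
    grad = 2.0 * (theta - target)
    return theta - alpha * grad

def meta_step(theta, targets, alpha=0.1, beta=0.05):
    # Outer loop: update the shared init so that the *post-adaptation*
    # loss, averaged over tasks, decreases.
    meta_grad = 0.0
    for t in targets:
        theta_i = inner_adapt(theta, t, alpha)
        # Chain rule: dL_i(theta_i)/dtheta = 2*(theta_i - t) * (1 - 2*alpha)
        meta_grad += 2.0 * (theta_i - t) * (1.0 - 2.0 * alpha)
    return theta - beta * meta_grad / len(targets)

theta = 0.0
targets = [1.0, 3.0, 5.0]  # hypothetical task optima
for _ in range(200):
    theta = meta_step(theta, targets)
# theta converges toward 3.0, the point from which one adaptation
# step serves every task best.
```

The key design point mirrored here is that the outer loop optimizes the initialization for *adaptability* rather than for direct performance, which is what lets a single learned starting point generalize to unseen subtasks.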