🤖 AI Summary
Systematic evaluation of architecture design and optimization strategies for multi-step language program pipelines (LPPs) has been lacking. Method: We introduce LangProBe, the first large-scale benchmark, comprising 2,000+ task–architecture–optimizer–model combinations, and propose a cross-model (GPT, Claude, Llama, etc.) evaluation framework integrating modular prompt programming and multi-level optimization, including gradient approximation, search, and meta-prompting. Contribution/Results: Our framework enables the first quantitative analysis of how program architecture, optimizer choice, and model selection jointly shape the quality–cost Pareto frontier. Experiments show that automated optimization significantly improves efficiency, yet human priors remain indispensable; optimized LPPs consistently achieve Pareto-dominant improvements over baseline model invocations. All code and evaluation data will be publicly released to advance reproducible, comparable research on language programs.
📝 Abstract
Composing language models (LMs) into multi-step language programs and automatically optimizing their modular prompts is now a mainstream paradigm for building AI systems, but the tradeoffs in this space have scarcely been studied. We introduce LangProBe, the first large-scale benchmark for evaluating the architectures and optimization strategies of language programs, with over 2,000 combinations of tasks, architectures, optimizers, and choices of LMs. Using LangProBe, we are the first to study the impact of program architectures and optimizers (and their compositions, together and with different models) on the tradeoff between quality and cost. We find that optimized language programs offer strong cost–quality Pareto improvements over raw calls to models, but we also demonstrate that human judgment (or empirical decisions) about which compositions to pursue is still necessary for best performance. We will open source the code and evaluation data for LangProBe.
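To make the cost–quality Pareto framing concrete, here is a minimal illustrative sketch (not code from the paper): each evaluated configuration yields a (cost, quality) point, and a configuration sits on the Pareto frontier only if no other configuration is at least as cheap and at least as accurate, with one of the two strictly better. The sample `configs` values below are hypothetical.

```python
def pareto_frontier(points):
    """Return the (cost, quality) points not dominated by any other point.

    A point p is dominated if some other point q has q.cost <= p.cost and
    q.quality >= p.quality (and q differs from p).
    """
    frontier = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] >= p[1] and q != p
            for q in points
        )
        if not dominated:
            frontier.append(p)
    return sorted(frontier)

# Hypothetical (cost, quality) results for five pipeline configurations:
configs = [(1.0, 0.60), (2.0, 0.75), (3.0, 0.72), (2.5, 0.80), (4.0, 0.80)]
print(pareto_frontier(configs))
# (3.0, 0.72) is dominated by (2.0, 0.75); (4.0, 0.80) by (2.5, 0.80).
```

An "optimized program Pareto-dominates a raw model call" then simply means its (cost, quality) point removes the raw call from this frontier.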