🤖 AI Summary
Optimizing configurations for LLM-based agent systems involves a prohibitively large search space, and existing heuristic or exhaustive methods are inefficient and often ineffective. Method: the paper proposes a lightweight multi-view performance predictor that jointly encodes three workflow representations (code architecture, textual prompts, and interaction graphs) using graph neural networks and prompt embeddings, and introduces a cross-domain unsupervised pretraining paradigm that drastically reduces reliance on costly real-task evaluations. Contribution/Results: evaluated on benchmarks spanning three domains, the predictor surpasses state-of-the-art approaches in both prediction accuracy and workflow utility: it reduces task success rate prediction error by 27.3% and cuts trial-and-error evaluation overhead by 81%, significantly accelerating the identification of optimal agent workflows.
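The cross-domain unsupervised pretraining idea can be illustrated with a contrastive objective that aligns two views of the same workflow without any task-success labels. This is a minimal sketch, not the paper's actual training procedure; the function name, batch layout, and temperature are all assumptions.

```python
import numpy as np

def info_nce_loss(z_graph, z_text, temp=0.1):
    # Contrastive view alignment (InfoNCE-style): pull each workflow's
    # graph-view embedding toward its own text-view embedding, and push it
    # away from the other workflows in the batch. No labels are required,
    # which is the point of unsupervised pretraining.
    z_g = z_graph / np.linalg.norm(z_graph, axis=1, keepdims=True)
    z_t = z_text / np.linalg.norm(z_text, axis=1, keepdims=True)
    logits = z_g @ z_t.T / temp                       # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # matched pairs on diagonal
```

When the two views of each workflow embed close together, the diagonal dominates each softmax row and the loss approaches zero; mismatched views yield a loss near log(batch size).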
📝 Abstract
Large language models (LLMs) have demonstrated remarkable capabilities across diverse tasks, but optimizing LLM-based agentic systems remains challenging due to the vast search space of agent configurations, prompting strategies, and communication patterns. Existing approaches often rely on heuristic-based tuning or exhaustive evaluation, which can be computationally expensive and suboptimal. This paper proposes Agentic Predictor, a lightweight predictor for efficient agentic workflow evaluation. Agentic Predictor is equipped with a multi-view encoding technique that learns representations of agentic systems from code architecture, textual prompts, and interaction graph features. To achieve high predictive accuracy while significantly reducing the number of workflow evaluations required to train the predictor, Agentic Predictor employs cross-domain unsupervised pretraining. By learning to approximate task success rates, Agentic Predictor enables fast and accurate selection of optimal agentic workflow configurations for a given task, significantly reducing the need for expensive trial-and-error evaluations. Experiments on a carefully curated benchmark spanning three domains show that the predictor outperforms state-of-the-art methods in both predictive accuracy and workflow utility, highlighting the potential of performance predictors in streamlining the design of LLM-based agentic workflows.
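The multi-view prediction described above can be sketched as follows: encode the interaction graph with simple message passing, encode the prompts with a text embedding, fuse the views, and map the result to a success-rate estimate. This is a minimal illustration under stated assumptions, not the paper's implementation; the hashed bag-of-words text encoder stands in for a learned prompt embedding, the code-architecture view is omitted for brevity (it could be embedded like the text view), and all names and dimensions are hypothetical.

```python
import numpy as np

def encode_graph(adj, feats, hops=2):
    # Simple message passing over the agent interaction graph: each hop
    # averages neighbor features, then node embeddings are mean-pooled.
    h = feats
    deg = adj.sum(axis=1, keepdims=True) + 1e-9
    for _ in range(hops):
        h = (adj @ h) / deg
    return h.mean(axis=0)

def encode_text(prompt, dim=16):
    # Hashed bag-of-words embedding: a cheap stand-in for a learned
    # prompt encoder, mapping tokens to buckets of a fixed-size vector.
    v = np.zeros(dim)
    for tok in prompt.lower().split():
        v[hash(tok) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

def predict_success(adj, node_feats, prompts, W, b):
    # Fuse the graph view and the (averaged) prompt view, then apply a
    # linear head with a sigmoid to estimate a task success rate in [0, 1].
    g = encode_graph(adj, node_feats)
    t = np.mean([encode_text(p) for p in prompts], axis=0)
    x = np.concatenate([g, t])
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))
```

In this sketch a three-agent workflow with 8-dimensional node features and a 16-dimensional text view yields a 24-dimensional fused vector; the head weights `W` would be trained against observed success rates, with the encoders initialized by the unsupervised pretraining stage.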