Prompt Optimization via Retrieved Reasoning Assets and Multi-Agent Analysis

📅 2025-10-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing prompt optimization methods treat evaluation as a black box, relying solely on numerical feedback and offering no interpretable diagnosis of why a prompt fails, which makes the optimization process opaque, unauditable, and hard to control. To address this, we propose MA-SAPO, a Multi-Agent framework for Score-Aware Prompt Optimization, which maps evaluation scores to specific prompt deficiencies via structured reasoning chains and generates evidence-backed revision suggestions. Its key innovation is a reusable reasoning asset mechanism that integrates retrieval augmentation with multi-agent collaborative diagnosis, enabling transparent, auditable, and controllable iterative optimization. On the HelpSteer1/2 benchmarks, MA-SAPO consistently outperforms single-pass prompting, retrieval-augmented baselines, and prior multi-agent strategies, demonstrating systematic improvement and strong generalization.
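To make the "reusable reasoning asset" idea concrete, the minimal sketch below defines one possible record type for such an asset. The field names and schema are assumptions; the summary only states that scores are mapped to deficiencies and paired with evidence-backed revision suggestions.

```python
# Hypothetical schema for a reusable "reasoning asset"; field names are
# assumptions, as the paper does not publish a concrete data model.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ReasoningAsset:
    prompt: str                  # the prompt that was evaluated
    metric: str                  # e.g. a HelpSteer attribute such as helpfulness
    score: float                 # numerical evaluation feedback
    diagnosis: str               # agent explanation of the specific deficiency
    refinements: List[str] = field(default_factory=list)  # evidence-backed revision suggestions
```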

📝 Abstract
Prompt optimization has emerged as an effective alternative to retraining for improving the performance of Large Language Models (LLMs). However, most existing approaches treat evaluation as a black box, relying solely on numerical scores while offering limited insight into why a prompt succeeds or fails. They also depend heavily on trial-and-error refinements, which are difficult to interpret and control. In this paper, we introduce MA-SAPO, a Multi-Agent framework for Score-Aware Prompt Optimization. Compared to prior methods, MA-SAPO explicitly couples evaluation outcomes with structured reasoning to guide systematic edits. The framework specifically consists of two stages: during the Reasoning Phase, agents collaboratively explain metric scores, diagnose weaknesses, and synthesize targeted refinements that are stored as reusable reasoning assets; during the Test Phase, agents retrieve these assets to analyze optimized prompts and apply only evidence-grounded edits. By turning evaluation signals into interpretable reasoning chains, MA-SAPO produces prompt refinements that are more transparent, auditable, and controllable. Experiments on the HelpSteer1/2 benchmarks demonstrate consistent improvements over single-pass prompting, retrieval-augmented baselines, and prior multi-agent strategies, validating the effectiveness of our approach.
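A minimal sketch of the two stages described in the abstract is given below. The explainer/diagnoser/refiner/editor callables stand in for LLM-backed agents; their names, signatures, and the dictionary layout are assumptions, not the authors' implementation.

```python
# Minimal sketch of the two-stage loop described in the abstract, assuming
# hypothetical agent roles. Each Agent is a placeholder for an LLM-backed call.
from typing import Callable, Dict, List

Agent = Callable[[str], str]


def reasoning_phase(prompt: str, score: float,
                    explainer: Agent, diagnoser: Agent, refiner: Agent) -> Dict[str, str]:
    """Turn a metric score into a stored, reusable reasoning asset."""
    explanation = explainer(f"Explain why this prompt scored {score}:\n{prompt}")
    weaknesses = diagnoser(explanation)
    refinement = refiner(weaknesses)
    return {"prompt": prompt, "explanation": explanation,
            "weaknesses": weaknesses, "refinement": refinement}


def test_phase(prompt: str, retrieved_assets: List[Dict[str, str]], editor: Agent) -> str:
    """Apply only edits that are grounded in previously stored reasoning assets."""
    evidence = "\n".join(asset["refinement"] for asset in retrieved_assets)
    return editor(f"Revise the prompt using only this evidence.\n"
                  f"Prompt: {prompt}\nEvidence:\n{evidence}")
```

In this reading, the Reasoning Phase produces the stored assets and the Test Phase consumes only retrieved, evidence-grounded material, which is what makes the edits auditable.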
Problem

Research questions and friction points this paper is trying to address.

Optimizing prompts through reusable reasoning assets and multi-agent analysis
Providing transparent insights into prompt success and failure reasons
Systematically guiding prompt edits using evidence-grounded evaluation signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent framework for score-aware prompt optimization
Retrieves reusable reasoning assets to guide edits (see the retrieval sketch after this list)
Turns evaluation signals into interpretable reasoning chains
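The paper states that retrieval augmentation selects which stored assets inform an edit, but does not specify the retrieval mechanism. The sketch below uses simple lexical (Jaccard) overlap purely as an illustrative assumption; a dense-embedding retriever would be a drop-in replacement.

```python
# Illustrative asset-retrieval step, assuming lexical overlap as the scorer.
# The actual retrieval method in MA-SAPO is not specified here.
from typing import Dict, List


def retrieve_assets(prompt: str, assets: List[Dict[str, str]], k: int = 3) -> List[Dict[str, str]]:
    """Return the k stored reasoning assets most similar to the new prompt."""
    query = set(prompt.lower().split())

    def overlap(asset: Dict[str, str]) -> float:
        doc = set(asset["prompt"].lower().split())
        union = len(query | doc)
        return len(query & doc) / union if union else 0.0  # Jaccard similarity

    return sorted(assets, key=overlap, reverse=True)[:k]
```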
Wonduk Seo
PKU Alumni; Enhans
Machine Learning, Text Mining, Information Retrieval, Social Computing, Bioinformatics
Juhyeon Lee
Peking University
LLM
Junseo Koh
Peking University, Beijing, China
Hyunjin An
Enhans, Seoul, Korea
Jian Park
Fudan University, Shanghai, China
Seunghyun Lee
Enhans, Seoul, Korea
Haihua Chen
University of North Texas, Texas, USA
Yi Bu
Assistant Professor, Department of Information Management, Peking University
scholarly communication, bibliometrics, science policy, science of science, innovation