AI Summary
In multi-agent collaborative reasoning, poorly designed prompts cause errors to propagate during inference. Method: This paper proposes an unsupervised, training-free, LLM-based self-feedback framework for real-time prompt refinement: it dynamically parses errors from LLM-generated textual feedback, iteratively rewrites prompts, and adapts them to multi-step reasoning tasks, requiring no additional annotations or model fine-tuning. Contribution/Results: Evaluated on five mathematical reasoning benchmarks, the method improves over zero-shot chain-of-thought by 3 to 37 percentage points, substantially narrowing the performance gap between small and large language models. It is the first approach to enable purely text-feedback-driven online prompt optimization, enhancing both the reliability and the scalability of multi-agent reasoning systems.
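The refinement loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the function names (`refine_prompt`, `toy_llm`), the critique/rewrite prompt templates, and the stopping rule are all assumptions made for the sketch; a real deployment would replace `toy_llm` with calls to an actual LLM API.

```python
def refine_prompt(task: str, prompt: str, llm, max_iters: int = 3) -> str:
    """Iteratively refine `prompt` at inference time using textual feedback.

    `llm` is any callable mapping a request string to a response string.
    No gradients, labels, or fine-tuning are involved -- only text feedback.
    """
    for _ in range(max_iters):
        # 1. Run the agent with the current prompt.
        answer = llm(f"{prompt}\n\nTask: {task}")
        # 2. Ask the LLM to critique the answer in natural language.
        feedback = llm(f"Critique the following answer to '{task}': {answer}")
        # 3. Stop if the critique signals no further changes (assumed convention).
        if "no further changes" in feedback.lower():
            break
        # 4. Otherwise, rewrite the prompt to address the critique.
        prompt = llm(f"Rewrite this prompt to address the feedback.\n"
                     f"Feedback: {feedback}\nPrompt: {prompt}")
    return prompt


def toy_llm(request: str) -> str:
    # Deterministic stand-in for a real LLM, for illustration only.
    if request.startswith("Critique"):
        return "The answer skips intermediate steps; make the prompt explicit."
    if request.startswith("Rewrite"):
        return "Think step by step and show every intermediate calculation."
    return "42"


refined = refine_prompt("What is 2 + 40?", "Answer the question.", toy_llm,
                        max_iters=2)
print(refined)  # the rewritten, more explicit prompt
```

The key property the sketch captures is that optimization happens entirely at inference time through natural-language feedback, so it applies to closed models just as well as open ones.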
Abstract
Agentic workflows, in which multiple AI agents collaborate to accomplish complex tasks such as reasoning or planning, are becoming increasingly prevalent. However, these workflows often suffer from error propagation and sub-optimal performance, largely due to poorly designed prompts that fail to effectively guide individual agents. This is a critical problem because it limits the reliability and scalability of these systems. We introduce ProRefine, an inference-time prompt optimization method that leverages textual feedback from large language models (LLMs) to address this challenge. ProRefine dynamically refines prompts for multi-step reasoning tasks without additional training or ground-truth labels. Evaluated on five benchmark mathematical reasoning datasets, ProRefine significantly surpasses zero-shot Chain-of-Thought baselines by 3 to 37 percentage points. This approach not only boosts accuracy but also allows smaller models to match the performance of larger ones, highlighting its potential for efficient, scalable AI deployment and democratized access to high-performing AI.