🤖 AI Summary
To address the difficulty of quantifying agent performance and the lack of effective credibility assessment mechanisms in multi-agent systems, this paper proposes the Dynamic Reputation Filtering (DRF) framework. DRF constructs an interactive scoring network that jointly models agent honesty and capability, introduces a dynamic reputation scoring mechanism, and integrates an Upper Confidence Bound (UCB)-driven agent selection strategy. Unlike static or single-metric evaluation approaches, DRF enables online reputation updates and task-adaptive agent filtering. Experiments on logical reasoning and code generation tasks demonstrate that DRF improves task completion quality by 23.6% and collaboration efficiency by 18.4% over baseline methods. Moreover, DRF exhibits strong scalability, making it suitable for large-scale multi-agent coordination scenarios.
📝 Abstract
With the evolution of generative AI, multi-agent systems leveraging large language models (LLMs) have emerged as a powerful tool for complex tasks. However, these systems face challenges in quantifying agent performance and lack mechanisms to assess agent credibility. To address these issues, we introduce DRF, a dynamic reputation filtering framework. DRF constructs an interactive rating network to quantify agent performance, designs a reputation scoring mechanism to measure agent honesty and capability, and integrates an Upper Confidence Bound (UCB)-based strategy to enhance agent selection efficiency. Experiments show that DRF significantly improves task completion quality and collaboration efficiency in logical reasoning and code-generation tasks, offering a new approach for multi-agent systems to handle large-scale tasks.
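The core mechanism described above, dynamic reputation scores combined with UCB-based agent selection, can be sketched as follows. This is a minimal illustration, not DRF's actual formulas: the abstract does not specify the reputation update rule or the UCB weighting, so the exponential-moving-average update and the standard UCB1 exploration bonus below are assumptions, and `ReputationSelector` is a hypothetical name.

```python
import math

class ReputationSelector:
    """Illustrative sketch of UCB-driven agent selection with dynamic
    reputation updates; the exact DRF scoring rules are not given in the
    abstract, so the formulas here are stand-ins."""

    def __init__(self, agent_ids, exploration_c=1.0):
        self.c = exploration_c
        self.reputation = {a: 0.5 for a in agent_ids}  # prior reputation in [0, 1]
        self.counts = {a: 0 for a in agent_ids}        # times each agent was chosen
        self.total = 0                                 # total selections so far

    def select(self):
        # UCB score: current reputation plus an exploration bonus that
        # shrinks as an agent accumulates interactions (standard UCB1 form).
        def ucb(a):
            if self.counts[a] == 0:
                return float("inf")  # try every agent at least once
            bonus = self.c * math.sqrt(math.log(self.total) / self.counts[a])
            return self.reputation[a] + bonus
        return max(self.reputation, key=ucb)

    def update(self, agent, outcome, lr=0.2):
        # Move reputation toward the observed task outcome in [0, 1]
        # (an exponential moving average standing in for DRF's update).
        self.total += 1
        self.counts[agent] += 1
        self.reputation[agent] += lr * (outcome - self.reputation[agent])
```

In use, the selector keeps routing tasks to agents whose observed outcomes stay high while still occasionally re-testing low-reputation agents, which is the exploration-exploitation trade-off the UCB strategy is meant to manage.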