🤖 AI Summary
Existing research lacks a systematic evaluation methodology for proactive AI mediation agents in multi-stakeholder, multi-issue negotiation settings. This paper introduces the first evaluation framework grounded in social cognitive theory, comprising a configurable simulation testbed and a multidimensional metric suite—including consensus shift rate, intervention latency, and mediation efficacy—to quantitatively assess agent performance in consensus facilitation, timing of intervention, and socio-cognitive intelligence. The framework integrates large language models, principles from social-cognitive mediation theory, and a plug-in-based agent architecture, and defines hierarchical difficulty levels for negotiation scenarios. Empirical evaluation on the ProMediate-Hard benchmark demonstrates that our agent achieves a 3.6-percentage-point improvement in consensus shift over baselines (10.65% vs. 7.01%) and reduces average response latency by 77% (from 15.98 s to 3.71 s), significantly enhancing proactive coordination capability in multi-stakeholder negotiations.
📝 Abstract
While Large Language Models (LLMs) are increasingly used in agentic frameworks to assist individual users, there is a growing need for agents that can proactively manage complex, multi-party collaboration. Systematic evaluation methods for such proactive agents remain scarce, limiting progress in developing AI that can effectively support multiple people together. Negotiation offers a demanding testbed for this challenge, requiring socio-cognitive intelligence to navigate conflicting interests across multiple participants and topics and to build consensus. Here, we present ProMediate, the first framework for evaluating proactive AI mediator agents in complex, multi-topic, multi-party negotiations. ProMediate consists of two core components: (i) a simulation testbed based on realistic negotiation cases and theory-driven difficulty levels (ProMediate-Easy, ProMediate-Medium, and ProMediate-Hard), with a plug-and-play proactive AI mediator grounded in socio-cognitive mediation theories, capable of flexibly deciding when and how to intervene; and (ii) a socio-cognitive evaluation framework with a new suite of metrics to measure consensus changes, intervention latency, mediator effectiveness, and intelligence. Together, these components establish a systematic framework for assessing the socio-cognitive intelligence of proactive AI agents in multi-party settings. Our results show that a socially intelligent mediator agent outperforms a generic baseline through faster, better-targeted interventions. In the ProMediate-Hard setting, our social mediator increases consensus change by 3.6 percentage points over the generic baseline (10.65% vs. 7.01%) while responding 77% faster (3.71s vs. 15.98s). In conclusion, ProMediate provides a rigorous, theory-grounded testbed to advance the development of proactive, socially intelligent agents.
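The headline numbers follow directly from the reported figures; a quick arithmetic check (all values taken from the results above, nothing assumed beyond standard rounding):

```python
# Reported ProMediate-Hard results: consensus change (%) and response latency (s).
ours_consensus, baseline_consensus = 10.65, 7.01
ours_latency, baseline_latency = 3.71, 15.98

# Absolute improvement in consensus change, in percentage points.
pp_gain = ours_consensus - baseline_consensus  # 3.64, reported as 3.6 pp

# Relative latency reduction versus the generic baseline.
latency_reduction = (baseline_latency - ours_latency) / baseline_latency

print(f"{pp_gain:.1f} pp consensus gain, {latency_reduction:.0%} faster")
# → 3.6 pp consensus gain, 77% faster
```

Note that the 3.6-point figure is an absolute (percentage-point) difference, while the 77% figure is a relative reduction; the two comparisons use different baselines by design.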