🤖 AI Summary
Large language models (LLMs) lack systematic evaluation in high-stakes global political decision-making, particularly within institutional frameworks like the United Nations Security Council (UNSC), where nuanced understanding of procedural rules, actor incentives, and dynamic diplomacy is critical.
Method: We introduce UNBench, the first benchmark explicitly designed for UNSC decision-making, comprising four tasks, namely co-penholder judgment, representative voting simulation, draft adoption prediction, and representative statement generation, spanning the drafting, voting, and discussion phases. Built on multi-source, temporally aligned UNSC data (1994–2024), it integrates prompt engineering, supervised fine-tuning, and a multi-task evaluation framework to support structured reasoning and generation under political constraints.
Contribution/Results: Comprehensive evaluation of state-of-the-art LLMs reveals fundamental limitations in institutional logic comprehension, positional consistency, and dynamic strategic interaction modeling. UNBench establishes the first reproducible, task-diverse AI benchmark and analytical paradigm for assessing LLMs in global governance contexts.
📝 Abstract
Large Language Models (LLMs) have achieved significant advances in natural language processing, yet their potential for high-stakes political decision-making remains largely unexplored. This paper addresses this gap by focusing on the application of LLMs to the United Nations (UN) decision-making process, where the stakes are particularly high and political decisions can have far-reaching consequences. We introduce a novel dataset comprising publicly available UN Security Council (UNSC) records from 1994 to 2024, including draft resolutions, voting records, and diplomatic speeches. Using this dataset, we propose the United Nations Benchmark (UNBench), the first comprehensive benchmark designed to evaluate LLMs across four interconnected political science tasks: co-penholder judgment, representative voting simulation, draft adoption prediction, and representative statement generation. These tasks span the three stages of the UN decision-making process (drafting, voting, and discussing) and aim to assess LLMs' ability to understand and simulate political dynamics. Our experimental analysis demonstrates the potential and challenges of applying LLMs in this domain, providing insights into their strengths and limitations in political science. This work contributes to the growing intersection of AI and political science, opening new avenues for research and practical applications in global governance. The UNBench repository can be accessed at: https://github.com/yueqingliang1/UNBench.