Decentralized Arena: Towards Democratic and Scalable Automatic Evaluation of Language Models

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current LLM benchmarks face three critical challenges: closed-ended evaluations saturate rapidly, human crowdsourcing is costly and inefficient, and single-model judges introduce systematic bias. To address these, we propose the first decentralized automated evaluation framework, in which all models collectively produce judgments through pairwise mutual assessment, eliminating reliance on a single authoritative judge. Methodologically, we design a sub-quadratic coarse-to-fine dynamic ranking algorithm, integrated with adaptive construction of evaluation dimensions, multi-model collaborative arbitration, and automated prompt generation. Evaluated on 66 mainstream LLMs, our framework achieves up to 97% correlation with human judgments while drastically reducing evaluation cost. All code and data are publicly released.
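To make the democratic judging concrete, here is a minimal sketch of how pairwise mutual assessment could be aggregated into a ranking. The function names (`answer`, `judge`) and the win-count aggregation are illustrative assumptions, not the paper's exact procedure, which may use a more sophisticated rating scheme.

```python
from itertools import combinations
from collections import defaultdict

def democratic_pairwise_ranking(models, questions, answer, judge):
    """Rank models by letting every model judge every pair of the others.

    `answer(model, question)` and `judge(judge_model, question, ans_a, ans_b)`
    stand in for LLM API calls; `judge` returns "a" or "b" for the preferred
    answer. A model never judges a comparison it participates in.
    """
    wins = defaultdict(int)
    for q in questions:
        answers = {m: answer(m, q) for m in models}
        for a, b in combinations(models, 2):
            for j in models:
                if j in (a, b):
                    continue  # no self-judging: keeps the vote democratic
                verdict = judge(j, q, answers[a], answers[b])
                wins[a if verdict == "a" else b] += 1
    # Simple aggregate: total wins across questions and judges
    # (the actual dearena scoring may differ).
    return sorted(models, key=lambda m: wins[m], reverse=True)
```

Exhaustive all-pairs judging like this is quadratic in the number of models, which is exactly the cost the coarse-to-fine insertion sketched later in this page is designed to avoid.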

📝 Abstract
The recent explosion of large language models (LLMs), each with its own general or specialized strengths, makes scalable, reliable benchmarking more urgent than ever. Standard practices today face fundamental trade-offs: closed-ended question-based benchmarks (e.g., MMLU) struggle with saturation as newer models emerge, while crowd-sourced leaderboards (e.g., Chatbot Arena) rely on costly and slow human judges. Recently, automated methods (e.g., LLM-as-a-judge) shed light on scalability, but risk bias by relying on one or a few "authority" models. To tackle these issues, we propose Decentralized Arena (dearena), a fully automated framework leveraging collective intelligence from all LLMs to evaluate each other. It mitigates single-model judge bias through democratic, pairwise evaluation, and remains efficient at scale through two key components: (1) a coarse-to-fine ranking algorithm for fast incremental insertion of new models with sub-quadratic complexity, and (2) an automatic question selection strategy for the construction of new evaluation dimensions. In extensive experiments across 66 LLMs, dearena attains up to 97% correlation with human judgements while significantly reducing cost. Our code and data will be publicly released at https://github.com/maitrix-org/de-arena.
Problem

Research questions and friction points this paper is trying to address.

Scalable and reliable benchmarking of diverse large language models (LLMs)
Mitigating bias in automated evaluation by avoiding single-model authority
Reducing cost and maintaining efficiency in LLM evaluation processes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages collective intelligence from all LLMs
Uses democratic pairwise evaluation to reduce bias
Employs a coarse-to-fine ranking algorithm for efficient scalability (sketched below)
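A rough sketch of how sub-quadratic incremental insertion could work under the coarse-to-fine idea: a binary search locates an approximate position for a new model using collective pairwise judgments, and a small local pass refines it. `compare` is a hypothetical placeholder for the collective judging step; the paper's actual algorithm may differ in its details.

```python
def insert_new_model(ranked, new_model, compare):
    """Insert `new_model` into an existing ranking with ~O(log n) comparisons.

    `ranked` is ordered best-to-worst. `compare(a, b)` is a placeholder that
    asks the remaining models to judge a vs. b on sampled questions and
    returns True if `a` is collectively preferred.
    """
    # Coarse stage: binary search for an approximate position.
    lo, hi = 0, len(ranked)
    while lo < hi:
        mid = (lo + hi) // 2
        if compare(new_model, ranked[mid]):
            hi = mid
        else:
            lo = mid + 1

    ranked.insert(lo, new_model)

    # Fine stage: re-check a small window of neighbors to correct
    # noisy coarse comparisons (one local bubble pass).
    for i in range(max(0, lo - 2), min(len(ranked) - 1, lo + 2)):
        if compare(ranked[i + 1], ranked[i]):
            ranked[i], ranked[i + 1] = ranked[i + 1], ranked[i]
    return ranked
```

Because each new model needs only about O(log n) coarse comparisons plus a constant-size refinement window, inserting all models stays well below the O(n^2) cost of exhaustive pairwise ranking.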