DEP: A Decentralized Large Language Model Evaluation Protocol

📅 2026-03-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current evaluations of large language models suffer from a lack of standardized protocols, reliance on ad hoc scripts, poor reproducibility, and risks of data leakage due to centralized frameworks. To address these issues, this work proposes DEP (Decentralized Evaluation Protocol), a novel decentralized evaluation architecture that decouples users, models, and benchmarks via a matching server, enabling modular, plug-and-play evaluation workflows deployable locally or remotely. DEP ensures data isolation and privacy while fostering community-driven benchmark development and long-term reusability. The accompanying DEP Toolkit supports checkpoint-resumable evaluation, concurrency control, and congestion management, and establishes a standardized interface for integrating new benchmarks. Empirical validation across more than 60 benchmarks demonstrates significantly reduced deployment overhead and enables unified, multi-task, multi-domain model assessment.
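Concretely, the decoupling the summary describes suggests a client flow along these lines. This is a minimal sketch; the server URL, routes, and payload fields are illustrative assumptions, not the paper's actual API:

```python
# Hypothetical client-side flow for a DEP-style decoupled evaluation.
# The matching server holds benchmark data and grading logic; the client
# only ever sees prompts and returns model outputs. All URLs, routes,
# and field names below are assumptions for illustration.
import requests

SERVER = "http://localhost:8000"  # matching server, mounted locally or deployed remotely

def evaluate(model_fn, benchmark_id: str) -> dict:
    # 1. Open an evaluation session for one benchmark.
    session = requests.post(f"{SERVER}/sessions",
                            json={"benchmark": benchmark_id}).json()
    sid = session["session_id"]

    # 2. Pull prompts; ground-truth answers never leave the server.
    tasks = requests.get(f"{SERVER}/sessions/{sid}/tasks").json()["tasks"]

    # 3. Run the model locally and submit only its outputs.
    predictions = [{"task_id": t["task_id"], "output": model_fn(t["prompt"])}
                   for t in tasks]
    requests.post(f"{SERVER}/sessions/{sid}/predictions",
                  json={"predictions": predictions})

    # 4. The server grades against its private answers and returns scores.
    return requests.get(f"{SERVER}/sessions/{sid}/score").json()
```

Under this shape, swapping in a new model means changing only `model_fn`, and swapping in a new benchmark means changing only `benchmark_id`, which is the plug-and-play property the summary claims.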

📝 Abstract
With the rapid development of Large Language Models (LLMs), a large number of benchmarks have been proposed. However, most benchmarks lack a unified evaluation standard and require manually implemented custom scripts, making it hard to ensure the consistency and reproducibility of results. Furthermore, mainstream evaluation frameworks are centralized, bundling datasets and answers together, which increases the risk of benchmark leakage. To address these issues, we propose the Decentralized Evaluation Protocol (DEP), a decentralized yet unified and standardized evaluation framework that connects participants through a matching server without constraining benchmarks. The server can be mounted locally or deployed remotely, and once a benchmark is adapted, it can be reused over the long term. By decoupling users, LLMs, and benchmarks, DEP enables modular, plug-and-play evaluation: benchmark files and evaluation logic stay exclusively on the server side. In the remote setting, users cannot access the ground truth, thereby achieving data isolation and leak-proof evaluation. To facilitate practical adoption, we develop the DEP Toolkit, a protocol-compatible toolkit that supports breakpoint resume, concurrent requests, and congestion control. We also provide detailed documentation for adapting new benchmarks to DEP. Using the DEP Toolkit, we evaluate multiple LLMs across benchmarks. Experimental results verify the effectiveness of DEP and show that it reduces the cost of deploying benchmark evaluations. As of February 2026, we have adapted over 60 benchmarks, and we continue to promote community co-construction to support unified evaluation across various tasks and domains.
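As an illustration of the toolkit features the abstract names, here is a minimal Python sketch of breakpoint resume, concurrent requests, and a simple congestion-control policy. The function names, checkpoint format, and exponential-backoff policy are assumptions for illustration, not the DEP Toolkit's actual implementation:

```python
# Illustrative sketch of the client-side features the abstract names:
# breakpoint (checkpoint) resume, concurrent requests, and congestion
# control. Names and the backoff policy are assumptions.
import asyncio, json, os

CHECKPOINT = "dep_progress.jsonl"

def load_done() -> set:
    # Resume support: task ids already answered in a previous run.
    if not os.path.exists(CHECKPOINT):
        return set()
    with open(CHECKPOINT) as f:
        return {json.loads(line)["task_id"] for line in f}

async def run_all(tasks, query_model, max_concurrency=8):
    done = load_done()
    sem = asyncio.Semaphore(max_concurrency)  # cap in-flight requests

    async def run_one(task):
        if task["task_id"] in done:
            return  # skip work finished before an interruption
        async with sem:
            delay = 1.0
            while True:
                try:
                    output = await query_model(task["prompt"])
                    break
                except ConnectionError:
                    # Crude congestion control: exponential backoff.
                    await asyncio.sleep(delay)
                    delay = min(delay * 2, 60)
        # Append-only checkpoint: each finished task is durable at once.
        with open(CHECKPOINT, "a") as f:
            f.write(json.dumps({"task_id": task["task_id"],
                                "output": output}) + "\n")

    await asyncio.gather(*(run_one(t) for t in tasks))
```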
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
evaluation benchmark
reproducibility
benchmark leakage
decentralized evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized Evaluation
Large Language Models
Benchmark Leakage Prevention
Modular Evaluation Framework
DEP Toolkit
Jianxiang Peng
Tianjin University
NLP
Junhao Li
Assistant Project Scientist, Cognitive Science, University of California, San Diego
Non-coding RNAs, DNA methylation, Epigenetics, Bioinformatics
Hongxiang Wang
TJUNLP Lab, Tianjin University, Tianjin, China
Haocheng Lyu
TJUNLP Lab, Tianjin University, Tianjin, China
Hui Guo
TJUNLP Lab, Tianjin University, Tianjin, China
Siyi Hao
TJUNLP Lab, Tianjin University, Tianjin, China
Zhen Wang
PhD student, Jilin University, China
Machine learning, SVM, Pattern recognition
Chuang Liu
National Supercomputing Center in Tianjin
Shaowei Zhang
TJUNLP Lab, Tianjin University, Tianjin, China
Bojian Xiong
TJUNLP Lab, Tianjin University, Tianjin, China
Yue Chen
School of Art Design and Media, East China University of Science and Technology
Human factors and ergonomics
Zhuowen Han
TJUNLP Lab, Tianjin University, Tianjin, China
Ling Shi
Tianjin University
NLP, LLM
Tianyu Dong
TJUNLP Lab, Tianjin University, Tianjin, China
Juesi Xiao
TJUNLP Lab, Tianjin University, Tianjin, China
Lei Yang
Professor, South China University of Technology
Mobile cloud computing, Edge computing, Internet of Things
Yuqi Ren
TJUNLP Lab, Tianjin University, Tianjin, China; Yuanhui AI
Deyi Xiong
Professor, College of Intelligence and Computing, Tianjin University, China
Natural Language Processing, Large Language Models, AI4Science