InfiCoEvalChain: A Blockchain-Based Decentralized Framework for Collaborative LLM Evaluation

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current evaluations of large language models (LLMs) suffer from centralization, lack of transparency, overfitting, and hardware-dependent performance fluctuations, leading to unreliable ranking statistics. This work proposes the first blockchain-based decentralized evaluation framework that leverages a distributed incentive mechanism to aggregate heterogeneous computing nodes worldwide, enabling collaborative and robust benchmarking across diverse environments and parameter settings. By integrating a consensus protocol and a verifiable reward system, the framework establishes a novel evaluation paradigm that is incentive-compatible, cheat-resistant, and highly transparent. Experimental results demonstrate a significant improvement in ranking confidence: on HumanEval, the standard deviation of ten runs for a single model drops from 1.67 to 0.28. The platform has been fully implemented and validated.
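The summary describes an incentive-compatible, cheat-resistant reward system for independent validators, but gives no protocol details. The toy settlement below is a minimal sketch of one plausible mechanism: validators stake funds, a round's consensus score is the median of submitted scores, and validators far from consensus forfeit their stake to the honest majority. The function name, tolerance rule, and stake amounts are all illustrative assumptions, not the paper's actual protocol.

```python
import statistics

def settle_round(submissions, tolerance=1.0, stake=10.0):
    """Toy settlement for one evaluation round.

    `submissions` maps validator id -> reported benchmark score.
    Validators whose score deviates from the median consensus by more
    than `tolerance` lose their stake; honest validators split the
    forfeited pool as a reward. Illustrative only, not the paper's
    on-chain protocol.
    """
    consensus = statistics.median(submissions.values())
    honest = {v for v, s in submissions.items() if abs(s - consensus) <= tolerance}
    dishonest = set(submissions) - honest
    pool = stake * len(dishonest)
    bonus = pool / len(honest) if honest else 0.0
    rewards = {v: (bonus if v in honest else -stake) for v in submissions}
    return consensus, rewards
```

With two honest validators near 85 and one reporting 60, the outlier is penalized and its stake is redistributed, which is the cheat-resistance property the summary claims.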

📝 Abstract
The rapid advancement of large language models (LLMs) demands increasingly reliable evaluation, yet current centralized evaluation suffers from opacity, overfitting, and hardware-induced variance. Our empirical analysis reveals an alarming inconsistency in existing evaluations: the standard deviation across ten repeated runs of a single model on HumanEval (1.67) actually exceeds the performance gap among the top-10 models on the official leaderboard (0.91), rendering current rankings statistically precarious. To mitigate these instabilities, we propose a decentralized evaluation framework that enables hardware and parameter diversity through large-scale benchmarking across heterogeneous compute nodes. By leveraging a blockchain-based protocol, the framework incentivizes global contributors to act as independent validators, using a robust reward system to ensure evaluation integrity and discourage dishonest participation. This collective verification transforms evaluation from a "centralized black box" into a "decentralized endorsement" where multi-party consensus and diverse inference environments yield a more stable, representative metric. Experimental results demonstrate that the decentralized evaluation framework reduces the standard deviation across ten runs on the same model to 0.28. This significant improvement over conventional frameworks ensures higher statistical confidence in model rankings. We have fully implemented this platform and will soon release it to the community.
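The abstract's central statistic is the sample standard deviation across repeated benchmark runs (1.67 centralized vs. 0.28 decentralized on HumanEval), and its aggregation idea is that each decentralized "run" is a consensus over many heterogeneous nodes, which averages out hardware- and parameter-induced noise. The sketch below shows how those quantities would be computed; the score lists are synthetic placeholders, not the paper's data, and simple mean aggregation stands in for the paper's blockchain-verified consensus.

```python
import statistics

def run_stddev(scores):
    """Sample standard deviation across repeated evaluation runs
    of the same model on the same benchmark."""
    return statistics.stdev(scores)

def consensus_score(node_scores):
    """Aggregate per-node scores into one consensus score.
    A plain mean here; the paper's protocol uses verifiable
    on-chain aggregation across heterogeneous nodes."""
    return statistics.mean(node_scores)

# Synthetic illustration: each decentralized run is itself a
# consensus over many noisy per-node measurements, so run-to-run
# spread shrinks relative to single-node centralized runs.
node_reports = [84.2, 85.1, 84.8, 85.3, 84.6]
decentralized_run = consensus_score(node_reports)
```

Averaging n independent node measurements shrinks the noise term roughly by a factor of sqrt(n), which is the statistical intuition behind the reported drop from 1.67 to 0.28.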
Problem

Research questions and friction points this paper is trying to address.

LLM evaluation
centralized evaluation
evaluation instability
hardware variance
statistical reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

blockchain-based evaluation
decentralized LLM benchmarking
hardware diversity
collective verification
evaluation stability
Yifan Yang
The Hong Kong Polytechnic University
Jinjia Li
InfiX.ai
Kunxi Li
Zhejiang University
Puhao Zheng
The Hong Kong Polytechnic University
Yuanyi Wang
The Hong Kong Polytechnic University
Zheyan Qu
InfiX.ai
Yang Yu
The Hong Kong Polytechnic University
Jianmin Wu
InfiX.ai
Ming Li
Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University
Cyber-Physical System, Blockchain, Large Language Models, ESG Technologies
Hongxia Yang
Professor, HK Polytechnic University
Machine Learning, Generative AI, Cognitive Intelligence, Statistical Modeling