🤖 AI Summary
Current evaluations of large language models (LLMs) suffer from centralization, lack of transparency, overfitting, and hardware-dependent performance fluctuations, leading to unreliable ranking statistics. This work proposes the first blockchain-based decentralized evaluation framework that leverages a distributed incentive mechanism to aggregate heterogeneous computing nodes worldwide, enabling collaborative and robust benchmarking across diverse environments and parameter settings. By integrating a consensus protocol and a verifiable reward system, the framework establishes a novel evaluation paradigm that is incentive-compatible, cheat-resistant, and highly transparent. Experimental results demonstrate a significant improvement in ranking confidence: on HumanEval, the standard deviation of ten runs for a single model drops from 1.67 to 0.28. The platform has been fully implemented and validated.
📝 Abstract
The rapid advancement of large language models (LLMs) demands increasingly reliable evaluation, yet current centralized evaluation suffers from opacity, overfitting, and hardware-induced variance. Our empirical analysis reveals an alarming inconsistency in existing evaluations: the standard deviation across ten repeated runs of a single model on HumanEval (1.67) exceeds the performance gap among the top-10 models on the official leaderboard (0.91), rendering current rankings statistically precarious. To mitigate these instabilities, we propose a decentralized evaluation framework that embraces hardware and parameter diversity through large-scale benchmarking across heterogeneous compute nodes. By leveraging a blockchain-based protocol, the framework incentivizes global contributors to act as independent validators, using a robust reward system to ensure evaluation integrity and discourage dishonest participation. This collective verification transforms evaluation from a "centralized black box" into a "decentralized endorsement", where multi-party consensus and diverse inference environments yield a more stable, representative metric. Experimental results demonstrate that the decentralized framework reduces the standard deviation across ten runs of the same model to 0.28, a significant improvement over conventional frameworks that ensures higher statistical confidence in model rankings. We have fully implemented the platform and will soon release it to the community.
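The core statistical argument can be sketched in a few lines: if the run-to-run standard deviation of one model's score exceeds the score gap separating the top-ranked models, a single-run leaderboard cannot reliably order them. The scores below are illustrative placeholders, not numbers from the paper.

```python
import statistics

# Hypothetical pass@1 scores (%) for ten repeated HumanEval runs of a
# single model under varying hardware/parameter settings (illustrative).
runs = [88.4, 91.2, 87.6, 90.5, 89.1, 92.0, 88.0, 90.9, 86.8, 91.5]

# Sample standard deviation across the repeated runs of one model.
run_std = statistics.stdev(runs)

# Hypothetical score gap between rank 1 and rank 10 on a leaderboard,
# mirroring the 0.91-point gap cited in the abstract.
top10_gap = 0.91

# If one model's measurement noise is wider than the spread of the
# entire top-10, the ranking is statistically precarious.
print(f"std across runs: {run_std:.2f}")
print(f"top-10 gap:      {top10_gap:.2f}")
print(f"ranking unreliable: {run_std > top10_gap}")
```

In this toy setting the run-to-run deviation dwarfs the leaderboard gap, which is exactly the instability the decentralized framework targets by averaging over many independent validators.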