Silicon Bureaucracy and AI Test-Oriented Education: Contamination Sensitivity and Score Confidence in LLM Benchmarks

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current evaluations of large language models (LLMs) rely heavily on public benchmarks, whose scores are vulnerable to training-data contamination and semantic leakage, making it difficult to distinguish genuine generalization from "test-taking" behavior. This work proposes an audit framework built on a router-worker architecture that quantifies the sensitivity of benchmark scores to contamination by comparing model performance on clean inputs against systematically perturbed ones, including deletion, rewriting, and noise injection. The study introduces the concepts of "silicon bureaucracy" and "AI test-oriented education" and uses them to construct an interpretable metric for contamination sensitivity. Experiments reveal that several prominent models show anomalous performance improvements under perturbation, exposing substantial heterogeneity in capability reliability among models with identical benchmark scores and thereby challenging the validity of current evaluation paradigms.

📝 Abstract
Public benchmarks increasingly govern how large language models (LLMs) are ranked, selected, and deployed. We frame this benchmark-centered regime as Silicon Bureaucracy and AI Test-Oriented Education, and argue that it rests on a fragile assumption: that benchmark scores directly reflect genuine generalization. In practice, however, such scores may conflate exam-oriented competence with principled capability, especially when contamination and semantic leakage are difficult to exclude from modern training pipelines. We therefore propose an audit framework for analyzing contamination sensitivity and score confidence in LLM benchmarks. Using a router-worker setup, we compare a clean-control condition with noisy conditions in which benchmark problems are systematically deleted, rewritten, and perturbed before being passed downstream. For a genuinely clean benchmark, noisy conditions should not consistently outperform the clean-control baseline. Yet across multiple models, we find widespread but heterogeneous above-baseline gains under noisy conditions, indicating that benchmark-related cues may be reassembled and can reactivate contamination-related memory. These results suggest that similar benchmark scores may carry substantially different levels of confidence. Rather than rejecting benchmarks altogether, we argue that benchmark-based evaluation should be supplemented with explicit audits of contamination sensitivity and score confidence.
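The clean-control versus noisy-condition comparison described in the abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the perturbation operators (`delete_words`, `inject_noise`) and the `contamination_sensitivity` metric are hypothetical stand-ins for the deletion, rewriting, and noise-injection conditions and the paper's contamination-sensitivity score.

```python
import random


def delete_words(problem: str, rng: random.Random, p: float = 0.15) -> str:
    """Deletion condition: drop roughly a fraction p of the tokens."""
    words = problem.split()
    kept = [w for w in words if rng.random() > p]
    return " ".join(kept) if kept else problem


def inject_noise(problem: str, rng: random.Random) -> str:
    """Noise condition: insert a distractor token at a random position."""
    words = problem.split()
    words.insert(rng.randrange(len(words) + 1), "[NOISE]")
    return " ".join(words)


def contamination_sensitivity(clean_scores, noisy_scores):
    """Mean above-baseline gain: noisy-condition score minus clean-control score.

    For a genuinely clean benchmark this should hover around or below zero;
    consistently positive values flag suspicious contamination-related gains.
    """
    gains = [n - c for c, n in zip(clean_scores, noisy_scores)]
    return sum(gains) / len(gains)


if __name__ == "__main__":
    rng = random.Random(0)
    q = "If a train travels 60 km in 45 minutes what is its average speed"
    print(delete_words(q, rng))
    print(inject_noise(q, rng))
    # Two models with the same clean scores but different sensitivity:
    print(contamination_sensitivity([0.80, 0.80], [0.78, 0.79]))  # near zero
    print(contamination_sensitivity([0.80, 0.80], [0.86, 0.84]))  # positive
```

The key observation the sketch encodes is the paper's sanity check: degrading the input should not help, so a reliably positive sensitivity under noisy conditions is evidence of benchmark cues reactivating memorized content rather than genuine capability.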
Problem

Research questions and friction points this paper is trying to address.

benchmark contamination
score confidence
large language models
generalization
semantic leakage
Innovation

Methods, ideas, or system contributions that make the work stand out.

contamination sensitivity
score confidence
benchmark auditing
LLM evaluation
semantic leakage
Yiliang Song
Institute of Artificial Intelligence (TeleAI), China Telecom
Hongjun An
Institute of Artificial Intelligence (TeleAI), China Telecom
Jiangan Chen
Guangxi Normal University
Xuanchen Yan
Northwestern Polytechnical University
Huan Song
Amazon AWS AI
Jiawei Shao
Institute of Artificial Intelligence (TeleAI), China Telecom
Xuelong Li
Institute of Artificial Intelligence (TeleAI), China Telecom