Establishing Trustworthy LLM Evaluation via Shortcut Neuron Analysis

📅 2025-06-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM evaluations suffer from contamination of public benchmark data, which inflates capability estimates. Method: Rather than constructing new dynamic benchmarks, this work traces the overestimation of contaminated models to "shortcut neurons" (neurons that acquire biased activation patterns when benchmark data leaks into training) and identifies them through comparative and causal analysis. Building on this, it introduces shortcut neuron patching, an evaluation method that suppresses the identified neurons at evaluation time without modifying any benchmark. Contribution/Results: Experiments across multiple benchmarks and hyperparameter settings show that the method significantly mitigates contamination effects. Evaluation scores align closely with those of the trusted MixEval benchmark (Spearman ρ > 0.95), substantially improving the reliability of LLM capability assessment.

📝 Abstract
The development of large language models (LLMs) depends on trustworthy evaluation. However, most current evaluations rely on public benchmarks, which are prone to data contamination issues that significantly compromise fairness. Previous research has focused on constructing dynamic benchmarks to address contamination. However, continuously building new benchmarks is costly and cyclical. In this work, we aim to tackle contamination by analyzing the mechanisms of contaminated models themselves. Through our experiments, we discover that the overestimation of contaminated models is likely due to parameters acquiring shortcut solutions in training. We further propose a novel method for identifying shortcut neurons through comparative and causal analysis. Building on this, we introduce an evaluation method called shortcut neuron patching to suppress shortcut neurons. Experiments validate the effectiveness of our approach in mitigating contamination. Additionally, our evaluation results exhibit a strong linear correlation with MixEval, a recently released trustworthy benchmark, achieving a Spearman coefficient ($\rho$) exceeding 0.95. This high correlation indicates that our method closely reveals the true capabilities of the models and is trustworthy. We conduct further experiments to demonstrate the generalizability of our method across various benchmarks and hyperparameter settings. Code: https://github.com/GaryStack/Trustworthy-Evaluation
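The comparative step and the patching step described above can be illustrated with a toy sketch. This is not the authors' implementation: the activation matrices, the neuron index, and the two-sigma threshold are all hypothetical stand-ins, chosen only to show the shape of the idea (flag neurons whose activation shifts abnormally between a contaminated model and a clean reference, then suppress them at evaluation time instead of editing the benchmark).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy activations: rows = benchmark prompts, cols = neurons.
# Hypothetical setup: neuron 3 plays the role of a "shortcut"
# neuron that fires abnormally high in the contaminated model.
n_prompts, n_neurons = 200, 8
clean = rng.normal(0.0, 1.0, (n_prompts, n_neurons))
contaminated = clean + rng.normal(0.0, 0.1, (n_prompts, n_neurons))
contaminated[:, 3] += 2.5  # biased activation learned from leaked data

# Comparative analysis: flag neurons whose mean activation shifts
# far more than the typical neuron-level noise (2-sigma is an
# illustrative threshold, not the paper's criterion).
shift = np.abs(contaminated.mean(axis=0) - clean.mean(axis=0))
threshold = shift.mean() + 2 * shift.std()
shortcut_idx = np.where(shift > threshold)[0]

# "Patching": suppress the flagged neurons (zero their activations)
# before scoring the model, leaving the benchmark itself untouched.
patched = contaminated.copy()
patched[:, shortcut_idx] = 0.0
```

In a real model this suppression would be applied inside the network (e.g. by intervening on hidden activations during the forward pass), and the causal analysis would additionally verify that suppressing a candidate neuron actually shrinks the contaminated model's score gap.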
Problem

Research questions and friction points this paper is trying to address.

Addressing data contamination in LLM evaluation benchmarks
Identifying shortcut neurons causing model overestimation
Proposing trustworthy evaluation via shortcut neuron patching
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifies shortcut neurons via comparative analysis
Proposes shortcut neuron patching evaluation method
Validates method with high correlation benchmarks
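The validation in the last bullet rests on rank correlation between patched scores and a trusted benchmark. A minimal sketch of that check, with made-up per-model scores standing in for real patched-evaluation and MixEval numbers (Spearman's ρ is the Pearson correlation of the ranks; the rank trick below assumes no tied scores):

```python
import numpy as np

def spearman_rho(x, y):
    # Spearman rank correlation: Pearson correlation of the ranks.
    # argsort-of-argsort yields 0-based ranks when there are no ties.
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical scores for five models: patched-evaluation results
# vs. results on a trusted contamination-free benchmark.
patched_scores = np.array([61.0, 48.5, 72.3, 55.1, 66.7])
trusted_scores = np.array([59.2, 47.0, 74.1, 54.8, 65.0])

rho = spearman_rho(patched_scores, trusted_scores)  # 1.0: identical ranking
```

A ρ close to 1, as reported in the paper (ρ > 0.95 against MixEval), means the patched evaluation orders models almost exactly as the trusted benchmark does.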
Kejian Zhu
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences
Shangqing Tu
Tsinghua University, graduate student
Trustworthy AI · Large Language Model · AI for Education
Zhuoran Jin
Institute of Automation, Chinese Academy of Sciences
Large Language Models · Natural Language Processing · Knowledge Engineering
Lei Hou
Tsinghua University
Juanzi Li
Tsinghua University
Semantic Web · Data Mining · NLP
Jun Zhao
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences