🤖 AI Summary
Scientific peer review is governed by tacit, rarely articulated norms, such as narrative preferences and structural biases, that remain poorly understood and difficult to measure.
Method: We propose a novel paradigm that uses large language models (LLMs) as diagnostic instruments. The hybrid framework combines LLM self-reasoning to generate testable hypotheses, paired-paper comparative analysis, and iterative hypothesis refinement with correlation-based validation, dynamically tracking how “normative priors” evolve into “posteriors” (see the code sketch after this summary).
Contribution/Results: This work is the first to systematically extract and quantify implicit evaluation criteria from LLM self-dialogues. Results reveal a shift in what reviews reward, from theoretical rigor toward cross-literature narrative positioning. Reviewers' explicit comments correlate moderately with LLM-derived priors (r = 0.49) yet avoid the narrative elements of the posteriors (r = −0.14) even as their scores reward them, confirming a tacit reward mechanism. The methodology provides a reproducible, interpretable pathway for deconstructing implicit social norms in scholarly evaluation.
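Below is a minimal sketch of the iterative hypothesis loop described above. All names (`PaperPair`, `llm_compare`, `hypothesis_explains`) are hypothetical stand-ins, and the LLM call is stubbed with simple keyword matching; the paper's actual prompts, data, and validation procedure will differ.

```python
# Sketch of the iterative prior-to-posterior hypothesis loop.
# All names and the stubbed "LLM" are hypothetical, not the authors' code.
from dataclasses import dataclass

@dataclass
class PaperPair:
    stronger: str  # text/metadata of the higher-scored paper in the pair
    weaker: str    # text/metadata of the lower-scored paper

def llm_compare(pair: PaperPair, known: list[str]) -> str:
    """Stub for an LLM call that reasons aloud about why `pair.stronger`
    out-scored `pair.weaker`, excluding hypotheses already in `known`."""
    candidates = ["theoretical rigor", "empirical scale",
                  "cross-literature positioning", "narrative framing"]
    for c in candidates:
        if c not in known and c in pair.stronger:
            return c
    return "unexplained"

def hypothesis_explains(hypothesis: str, pair: PaperPair) -> bool:
    """Stub for the check (in practice another LLM judgment) of whether
    `hypothesis` accounts for the score gap in `pair`."""
    return hypothesis in pair.stronger

def extract_tacit_norms(pairs: list[PaperPair], max_rounds: int = 10) -> list[str]:
    hypotheses: list[str] = []   # the initial ones play the "normative priors"
    unexplained = list(pairs)
    for _ in range(max_rounds):
        if not unexplained:
            break
        # Mine a new hypothesis from a pair the current set cannot explain,
        # then drop every pair the updated set now explains.
        hypotheses.append(llm_compare(unexplained[0], hypotheses))
        unexplained = [p for p in unexplained
                       if not any(hypothesis_explains(h, p) for h in hypotheses)]
    return hypotheses            # the accumulated set plays the "posteriors"

pairs = [PaperPair("notable for theoretical rigor ...", "baseline ..."),
         PaperPair("notable for cross-literature positioning ...", "baseline ...")]
print(extract_tacit_norms(pairs))
# ['theoretical rigor', 'cross-literature positioning']
```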
📝 Abstract
This paper calls on the research community not only to investigate how human biases are inherited by large language models (LLMs) but also to explore how these biases in LLMs can be leveraged to make society's "unwritten code", such as implicit stereotypes and heuristics, visible and accessible for critique. We introduce a conceptual framework through a case study in science: uncovering hidden rules in peer review, the factors that reviewers care about but rarely state explicitly because of normative scientific expectations. The idea of the framework is to push LLMs to articulate their heuristics by generating self-consistent hypotheses about why one paper appeared stronger in reviewer scoring among paired papers submitted to 45 computer science conferences, while iteratively searching for deeper hypotheses on the remaining pairs that existing hypotheses cannot explain. We observed that LLMs' normative priors about the internal characteristics of good science, extracted from their self-talk (e.g., theoretical rigor), were systematically updated toward posteriors that emphasize storytelling about external connections, such as how the work is positioned and connected within and across literatures. This shift exposes the primacy of scientific myths about intrinsic properties driving scientific excellence, when it is extrinsic contextualization and storytelling that influence conceptions of relevance and significance. Human reviewers tend to explicitly reward aspects that moderately align with LLMs' normative priors (correlation = 0.49) but avoid articulating the contextualization and storytelling posteriors in their review comments (correlation = -0.14), despite implicitly rewarding them with positive scores. We discuss the broad applicability of the framework, which leverages LLMs as diagnostic tools to surface the tacit codes underlying human society, enabling more precisely targeted responsible AI.
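To make the reported correlations concrete, here is a minimal sketch of the correlation-based validation step, assuming one number per evaluation criterion: an LLM-derived prior or posterior weight, and the rate at which reviewers explicitly mention that criterion. The vectors below are invented for illustration; only the r = 0.49 and r = -0.14 figures come from the paper.

```python
# Sketch of the correlation-based validation. Vectors are toy, hypothetical
# numbers (one entry per evaluation criterion); only the reported r = 0.49
# and r = -0.14 come from the paper itself.
import numpy as np

criteria             = ["theoretical rigor", "empirical scale",
                        "cross-literature positioning", "narrative framing"]
llm_prior_weight     = np.array([0.9, 0.7, 0.3, 0.2])  # hypothetical
llm_posterior_weight = np.array([0.4, 0.5, 0.9, 0.8])  # hypothetical
human_explicit_rate  = np.array([0.8, 0.6, 0.2, 0.1])  # hypothetical

r_prior = np.corrcoef(llm_prior_weight, human_explicit_rate)[0, 1]
r_post  = np.corrcoef(llm_posterior_weight, human_explicit_rate)[0, 1]
print(f"priors vs. explicit reviewer comments:     r = {r_prior:+.2f}")
print(f"posteriors vs. explicit reviewer comments: r = {r_post:+.2f}")
# A positive r for priors and a negative r for posteriors reproduces the
# paper's qualitative pattern: reviewers explicitly reward prior-like
# criteria but rarely articulate the storytelling posteriors that their
# scores nonetheless reward.
```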