🤖 AI Summary
This study investigates how large language models (LLMs) internally represent the abstract notion of “scientific quality.” By applying sparse autoencoders to extract monosemantic features from LLM activations and evaluating their predictive power on regression and classification tasks, the work systematically assesses these features’ ability to forecast established scientific quality indicators, namely citation counts, journal SJR scores, and journal h-indices. The research identifies and categorizes four distinct classes of stable, interpretable features corresponding to research methodology, review article type, high-impact research domains, and domain-specific terminology. These findings reveal a structured encoding of scientific quality within LLMs, offering novel insight into how such models implicitly evaluate scholarly output.
📝 Abstract
In recent years, there has been growing use of generative AI, and large language models (LLMs) in particular, to support both the assessment and generation of scientific work. Although some studies have shown that LLMs can, to a certain extent, evaluate research according to perceived quality, our understanding of the internal mechanisms that enable this capability remains limited. This paper presents the first study that investigates how LLMs encode the concept of scientific quality through relevant monosemantic features extracted using sparse autoencoders. We derive such features under different experimental settings and assess their ability to serve as predictors across three tasks related to research quality: predicting citation count, journal SJR, and journal h-index. The results indicate that LLMs encode features associated with multiple dimensions of scientific quality. In particular, we identify four recurring types of features that capture key aspects of how research quality is represented: 1) features reflecting research methodologies; 2) features related to publication type, with literature reviews typically exhibiting higher impact; 3) features associated with high-impact research fields and technologies; and 4) features corresponding to specific scientific jargon. These findings represent an important step toward understanding how LLMs encapsulate concepts related to research quality.
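The pipeline the abstract describes can be sketched in a few lines. This is a hedged illustration, not the authors' code: the SAE encoder weights, data shapes, and the synthetic citation target below are all stand-in assumptions. The sketch shows the three stages, taking LLM hidden-state activations, passing them through a ReLU sparse-autoencoder encoder to obtain non-negative feature activations, and regressing a quality indicator (here, log citation count) on those features.

```python
import numpy as np

# Illustrative sketch of the abstract's pipeline; all names, shapes,
# and weights are hypothetical stand-ins, not the paper's actual setup.

rng = np.random.default_rng(0)
n_papers, d_model, d_features = 500, 64, 256

# Stand-in for LLM residual-stream activations (one vector per paper).
activations = rng.normal(size=(n_papers, d_model))

# A trained SAE would supply W_enc / b_enc; random stand-ins here.
W_enc = rng.normal(scale=d_model**-0.5, size=(d_model, d_features))
b_enc = np.zeros(d_features)

# ReLU encoder: non-negative (and, in a trained SAE, sparse) features.
features = np.maximum(activations @ W_enc + b_enc, 0.0)

# Synthetic quality target: log citations driven by a few features.
true_w = np.zeros(d_features)
true_w[:5] = 1.0
log_citations = features @ true_w + rng.normal(scale=0.1, size=n_papers)

# Least-squares regression from SAE features to the quality indicator.
X = np.hstack([features, np.ones((n_papers, 1))])  # add intercept
coef, *_ = np.linalg.lstsq(X, log_citations, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((log_citations - pred) ** 2) / np.sum(
    (log_citations - log_citations.mean()) ** 2
)
print(f"R^2 on training data: {r2:.3f}")
```

In the actual study the features come from an SAE trained on real model activations and the targets are observed bibliometric indicators; the point of the sketch is only the shape of the mapping: activations → sparse features → quality predictor.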