How Do LLMs Encode Scientific Quality? An Empirical Study Using Monosemantic Features from Sparse Autoencoders

📅 2026-02-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how large language models (LLMs) internally represent the abstract notion of "scientific quality." By applying sparse autoencoders to extract monosemantic features from LLM activations and evaluating their predictive power through regression and classification tasks, the work systematically assesses these features' ability to forecast established scientific quality indicators: citation count, journal SJR score, and journal h-index. The research identifies four recurring classes of stable, interpretable features, corresponding to research methodology, publication type (with literature reviews typically exhibiting higher impact), high-impact research fields and technologies, and domain-specific terminology. These findings reveal a structured encoding of scientific quality within LLMs, offering novel insight into how such models implicitly evaluate scholarly output.

📝 Abstract
In recent years, there has been a growing use of generative AI, and large language models (LLMs) in particular, to support both the assessment and generation of scientific work. Although some studies have shown that LLMs can, to a certain extent, evaluate research according to perceived quality, our understanding of the internal mechanisms that enable this capability remains limited. This paper presents the first study to investigate how LLMs encode the concept of scientific quality through relevant monosemantic features extracted using sparse autoencoders. We derive such features under different experimental settings and assess their ability to serve as predictors across three tasks related to research quality: predicting citation count, journal SJR, and journal h-index. The results indicate that LLMs encode features associated with multiple dimensions of scientific quality. In particular, we identify four recurring types of features that capture key aspects of how research quality is represented: 1) features reflecting research methodologies; 2) features related to publication type, with literature reviews typically exhibiting higher impact; 3) features associated with high-impact research fields and technologies; and 4) features corresponding to specific scientific jargon. These findings represent an important step toward understanding how LLMs encapsulate concepts related to research quality.
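The pipeline the abstract describes can be sketched as follows: train a sparse autoencoder (SAE) on LLM activations to obtain sparse, interpretable features, then use those features as predictors of a quality indicator such as citation count. This is a minimal illustrative sketch, not the paper's actual code; the tied-weight SAE architecture, all dimensions, hyperparameters, and the synthetic data are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_feat, n = 32, 64, 500            # activation dim, SAE width, papers
X = rng.normal(size=(n, d_model))           # stand-in for LLM activations
y = X @ rng.normal(size=d_model)            # stand-in quality indicator

# Tied-weight SAE: features H = ReLU(X W + b), reconstruction = H W^T,
# trained by gradient descent with an L1 sparsity penalty on H.
W = rng.normal(scale=0.1, size=(d_model, d_feat))
b = np.zeros(d_feat)
lr, l1, losses = 1e-2, 1e-3, []

for _ in range(300):
    H = np.maximum(X @ W + b, 0.0)          # sparse feature activations
    E = H @ W.T - X                         # reconstruction error
    losses.append((E ** 2).mean())
    G = (2.0 / n) * (E @ W) + l1 * np.sign(H)   # dLoss/dH
    dpre = G * (H > 0)                      # back through the ReLU
    W -= lr * ((2.0 / n) * (E.T @ H) + X.T @ dpre)  # decoder + encoder grads
    b -= lr * dpre.sum(axis=0)

# Use the sparse features as predictors of the quality indicator
# (plain ridge regression, standing in for the paper's regression task).
H = np.maximum(X @ W + b, 0.0)
Hc, yc = H - H.mean(axis=0), y - y.mean()
beta = np.linalg.solve(Hc.T @ Hc + 1e-3 * np.eye(d_feat), Hc.T @ yc)
r2 = 1.0 - ((Hc @ beta - yc) ** 2).sum() / (yc ** 2).sum()
sparsity = (H > 0).mean()                   # fraction of active features
```

In practice the activations would come from a chosen layer of the LLM while it reads the paper text, and interpreting individual SAE features (e.g. as "methodology" or "review article" detectors) would require inspecting the inputs that activate them most strongly.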
Problem

Research questions and friction points this paper is trying to address.

scientific quality
large language models
monosemantic features
sparse autoencoders
research evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

monosemantic features
sparse autoencoders
scientific quality
large language models
interpretable representations
Michael McCoubrey
Knowledge Media Institute, The Open University, Walton Hall, Milton Keynes, MK7 6AA, United Kingdom
Angelo Salatino
Knowledge Media Institute, The Open University, Walton Hall, Milton Keynes, MK7 6AA, United Kingdom
Francesco Osborne
KMi, The Open University
Science of Science, Information Extraction, Knowledge Graphs, Artificial Intelligence, Semantic Web
Enrico Motta
Professor of Knowledge Technologies, KMi, The Open University
Semantic Web, Ontology Engineering, Knowledge Systems, Data Science