Large Language Models for History, Philosophy, and Sociology of Science: Interpretive Uses, Methodological Challenges, and Critical Perspectives

📅 2025-06-13
🤖 AI Summary
This paper addresses epistemological challenges arising from the integration of large language models (LLMs) into the History, Philosophy, and Sociology of Science (HPSS), where LLMs’ contextual inference capabilities coexist with implicit, non-neutral semantic assumptions that can undermine interpretive research traditions. Method: The authors propose an “LLM-as-epistemic-infrastructure” framework, advocating that HPSS scholars take the lead in defining evaluation criteria and treat model selection as an interpretive stance. They systematically compare full-context and generative models across text structuring, pattern recognition, and dynamic process modeling, integrating continued pretraining, fine-tuning, and retrieval-augmented generation (RAG). Contribution/Results: Four integrative principles are established: (1) models as interpretive artifacts; (2) critical digital literacy as foundational; (3) domain-specific corpus curation and benchmarking; and (4) technological augmentation, not replacement, of hermeneutic practice. The work repositions HPSS scholars from passive users of LLMs to co-constructors of their epistemology.

📝 Abstract
This paper explores the use of large language models (LLMs) as research tools in the history, philosophy, and sociology of science (HPSS). LLMs are remarkably effective at processing unstructured text and inferring meaning from context, offering new affordances that challenge long-standing divides between computational and interpretive methods. This raises both opportunities and challenges for HPSS, which emphasizes interpretive methodologies and understands meaning as context-dependent, ambiguous, and historically situated. We argue that HPSS is uniquely positioned not only to benefit from LLMs' capabilities but also to interrogate their epistemic assumptions and infrastructural implications. To this end, we first offer a concise primer on LLM architectures and training paradigms tailored to non-technical readers. We frame LLMs not as neutral tools but as epistemic infrastructures that encode assumptions about meaning, context, and similarity, conditioned by their training data, architecture, and patterns of use. We then examine how computational techniques enhanced by LLMs, such as structuring data, detecting patterns, and modeling dynamic processes, can be applied to support interpretive research in HPSS. Our analysis compares full-context and generative models, outlines strategies for domain and task adaptation (e.g., continued pretraining, fine-tuning, and retrieval-augmented generation), and evaluates their respective strengths and limitations for interpretive inquiry in HPSS. We conclude with four lessons for integrating LLMs into HPSS: (1) model selection involves interpretive trade-offs; (2) LLM literacy is foundational; (3) HPSS must define its own benchmarks and corpora; and (4) LLMs should enhance, not replace, interpretive methods.
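The adaptation strategies named in the abstract (continued pretraining, fine-tuning, retrieval-augmented generation) can be illustrated with a toy retrieval step. This is a minimal sketch only: the corpus, the bag-of-words cosine scoring, and the prompt template below are hypothetical stand-ins, not the paper's implementation, which would use learned embeddings and an actual language model.

```python
# Minimal RAG sketch: retrieve the most relevant passage for a query,
# then build a prompt that grounds the model's answer in that passage
# instead of its parametric memory. All data here is illustrative.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    qv = Counter(query.lower().split())
    ranked = sorted(corpus, key=lambda d: cosine(qv, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble a context-grounded prompt from the retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context above."

# Toy corpus standing in for a curated HPSS text collection.
corpus = [
    "Kuhn argued that paradigm shifts restructure scientific perception.",
    "Latour traced how facts are stabilized in laboratory practice.",
]
prompt = build_prompt(
    "What did Kuhn say about paradigms?",
    retrieve("Kuhn paradigm shifts", corpus),
)
print(prompt)
```

In a real pipeline the term-overlap retriever would be replaced by a dense embedding model, and the prompt would be passed to a generative LLM; the sketch only shows where retrieval slots into the workflow.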
Problem

Research questions and friction points this paper is trying to address.

LLMs' role in history, philosophy, and sociology of science research
Challenges of integrating computational and interpretive methods in HPSS
Evaluating LLMs' epistemic assumptions and infrastructural implications
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs' effectiveness at processing unstructured text and inferring meaning from context
Framing LLMs as epistemic infrastructures that encode assumptions about meaning, context, and similarity
Using LLMs to enhance, rather than replace, interpretive research in HPSS
Arno Simons
Technische Universität Berlin
Digital Humanities | Science-Policy Nexus | Wikipedia | Mixed-Methods
Michael Zichert
Department of History and Philosophy of Modern Science, Technische Universität Berlin, Germany
Adrian Wüthrich
Department of History and Philosophy of Modern Science, Technische Universität Berlin, Germany