Bridging Technology and Humanities: Evaluating the Impact of Large Language Models on Social Sciences Research with DeepSeek-R1

📅 2025-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates the applicability and pedagogical suitability of large language models (LLMs) in empirical research within the humanities and social sciences. Focusing on DeepSeek-R1, it conducts multidisciplinary empirical experiments across seven domains: low-resource language translation, educational Q&A, academic writing assistance, logical reasoning, psychometric analysis, public health policy evaluation, and arts education. It introduces “self-generated reasoning process” as a novel, interpretable metric for assessing novice-friendly AI research assistants. Using a comparative experimental framework against o1-preview, the evaluation integrates domain-expert annotation, answer plausibility scoring, and explanation completeness assessment. Results demonstrate that DeepSeek-R1 achieves higher accuracy, produces clearer reasoning chains, and delivers more comprehensive explanations—particularly excelling in instructional support tasks. These findings validate its practical utility in enhancing research efficiency and broadening knowledge accessibility in the social sciences.

📝 Abstract
In recent years, Large Language Models (LLMs) have made significant breakthroughs in natural language processing and have gradually been applied to research in the humanities and social sciences. Because of their strong text understanding, generation, and reasoning capabilities, LLMs offer wide-ranging value in these fields: they can analyze large-scale text data and draw inferences from it. This article analyzes the large language model DeepSeek-R1 across seven areas: low-resource language translation, educational question answering, student writing improvement in higher education, logical reasoning, educational measurement and psychometrics, public health policy analysis, and art education. We then compare DeepSeek-R1's answers in these seven areas with those given by o1-preview. DeepSeek-R1 performs well in the humanities and social sciences, answering most questions correctly and logically and providing reasonable analysis and explanations. Compared with o1-preview, it automatically generates reasoning processes and gives more detailed explanations, making it suitable for beginners or readers who need a detailed understanding of the material, whereas o1-preview is better suited to quick reading. The analysis shows that LLMs have broad application potential in the humanities and social sciences and offer clear advantages in improving text-analysis efficiency, language communication, and related areas. Their powerful language understanding and generation capabilities enable them to explore complex problems in these fields in depth and provide innovative tools for both academic research and practical applications.
Problem

Research questions and friction points this paper is trying to address.

Evaluating DeepSeek-R1's impact on humanities and social sciences research.
Comparing DeepSeek-R1 with o1-preview in seven application areas.
Exploring LLMs' potential in improving text analysis and language communication.
Innovation

Methods, ideas, or system contributions that make the work stand out.

DeepSeek-R1 excels in low-resource language translation.
DeepSeek-R1 improves student writing in higher education.
DeepSeek-R1 provides detailed logical reasoning explanations.
Peiran Gu, School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China
Fuhao Duan, School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China
Wenhao Li, School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China
Bochen Xu, School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China
Ying Cai, Associate Professor, Department of Computer Science, Iowa State University (research interests: data privacy and confidentiality; query authentication and correction; mobile object management; multimedia communications)
Chenxun Zhuo, School of Foreign Languages, Northwest University, Xi'an, China
Tianming Liu, Distinguished Research Professor of Computer Science, University of Georgia (research interests: brain; brain-inspired AI; LLMs; artificial general intelligence; quantum AI)