🤖 AI Summary
This study systematically evaluates the applicability and pedagogical suitability of large language models (LLMs) in empirical research within the humanities and social sciences. Focusing on DeepSeek-R1, it conducts multidisciplinary empirical experiments across seven domains: low-resource language translation, educational Q&A, academic writing assistance, logical reasoning, psychometric analysis, public health policy evaluation, and arts education. It introduces “self-generated reasoning process” as a novel, interpretable metric for assessing novice-friendly AI research assistants. Using a comparative experimental framework against o1-preview, the evaluation integrates domain-expert annotation, answer plausibility scoring, and explanation completeness assessment. Results demonstrate that DeepSeek-R1 achieves higher accuracy, produces clearer reasoning chains, and delivers more comprehensive explanations—particularly excelling in instructional support tasks. These findings validate its practical utility in enhancing research efficiency and broadening knowledge accessibility in the social sciences.
📝 Abstract
In recent years, Large Language Models (LLMs) have achieved significant breakthroughs in natural language processing and have gradually been applied to research in the humanities and social sciences. Because of their strong text understanding, generation, and reasoning capabilities, LLMs offer broad application value in these fields, where they can analyze large-scale text data and draw inferences. This article evaluates the large language model DeepSeek-R1 across seven tasks: low-resource language translation, educational question answering, student writing improvement in higher education, logical reasoning, educational measurement and psychometrics, public health policy analysis, and art education. We then compare DeepSeek-R1's answers on these seven tasks with those given by o1-preview. DeepSeek-R1 performs well in the humanities and social sciences, answering most questions correctly and logically and providing reasonable analyses and explanations. Compared with o1-preview, it automatically generates reasoning processes and offers more detailed explanations, making it well suited for beginners or readers who need a thorough understanding of the material, whereas o1-preview is better suited for quick reading. Our analysis shows that LLMs have broad application potential in the humanities and social sciences and offer clear advantages in improving the efficiency of text analysis, language communication, and related areas. Their powerful language understanding and generation capabilities allow them to probe complex problems in these fields and provide innovative tools for academic research and practical applications.