A Survey of Large Language Models in Discipline-specific Research: Challenges, Methods and Opportunities

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Systematic analysis and domain-adaptive deployment of large language models (LLMs) in interdisciplinary research remain underexplored. Method: We propose a discipline-aware LLM application taxonomy, integrating four technical paradigms—supervised fine-tuning, retrieval-augmented generation (RAG), agent-based architectures, and tool-integrated reasoning—and analyze their feasibility and disciplinary alignment across mathematics, physics, chemistry, biology, and the humanities/social sciences. Contribution/Results: The study identifies key cross-disciplinary challenges—including domain-specific knowledge depth and lack of standardized evaluation metrics—as well as emerging trends such as discipline-customized agents and enhanced interpretability. It further offers methodological recommendations for optimizing LLM adaptation in complex scholarly contexts. This framework provides researchers with a structured, actionable reference to guide effective and innovative LLM deployment across diverse academic domains.
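To make the RAG paradigm named above concrete, here is a minimal, self-contained sketch of retrieval-augmented generation. The toy corpus, bag-of-words retriever, and `call_llm` stub are illustrative assumptions for this summary, not components described in the survey.

```python
# Minimal RAG sketch: retrieve the most relevant document for a query,
# then condition a (stubbed) LLM call on it. Illustrative only.
from collections import Counter
import math

CORPUS = [
    "Supervised fine-tuning adapts a pretrained model with labeled domain data.",
    "Retrieval-augmented generation grounds answers in retrieved documents.",
    "Agent-based architectures let an LLM plan, act, and observe in a loop.",
]

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k corpus documents most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(CORPUS, key=lambda d: cosine(q, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    # Placeholder for any chat-completion API call.
    return f"[LLM answer conditioned on a prompt of {len(prompt)} chars]"

query = "How does retrieval-augmented generation reduce hallucination?"
context = "\n".join(retrieve(query))
print(call_llm(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"))
```

In practice the bag-of-words retriever would be replaced by a dense embedding index, but the control flow (retrieve, then generate on retrieved context) is the same.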

📝 Abstract
Large Language Models (LLMs) have demonstrated their transformative potential across numerous disciplines, reshaping existing research methodologies and fostering interdisciplinary collaboration. However, a systematic understanding of their integration into diverse disciplines remains underexplored. This survey provides a comprehensive overview of the application of LLMs in interdisciplinary studies, categorising research efforts from both technical and application perspectives. From a technical standpoint, key methodologies are examined, including supervised fine-tuning, retrieval-augmented generation, agent-based approaches, and tool-use integration, which enhance the adaptability and effectiveness of LLMs in discipline-specific contexts. From an application perspective, the paper explores how LLMs contribute to various disciplines, including mathematics, physics, chemistry, biology, and the humanities and social sciences, demonstrating their role in discipline-specific tasks. Prevailing challenges are critically examined, and promising research directions are highlighted alongside recent advances in LLMs. By providing a comprehensive overview of technical developments and applications in this field, this survey aims to serve as a valuable resource for researchers navigating the complex landscape of LLMs in interdisciplinary studies.
Problem

Research questions and friction points this paper is trying to address.

A systematic understanding of LLMs across diverse disciplines is lacking
Identifying technical methods that enhance LLMs' discipline-specific adaptability
Assessing LLMs' applicability across various academic fields
Innovation

Methods, ideas, or system contributions that make the work stand out.

Supervised fine-tuning enhances LLM adaptability to domain-specific tasks
Retrieval-augmented generation improves LLM effectiveness by grounding outputs in retrieved domain knowledge
Agent-based architectures and tool-use integration extend LLM capabilities (see the sketch after this list)
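As a rough illustration of the agent and tool-use paradigms above, the sketch below shows a single plan-act-observe step: a stubbed model requests a calculator tool, the harness executes it, and the result is fed back for the final answer. The `fake_llm` stub, the tool registry, and the message format are hypothetical, not taken from the survey.

```python
# Minimal tool-integrated reasoning sketch: model emits a tool call,
# harness dispatches it, result is returned to the model. Illustrative only.

def calculator(expression: str) -> str:
    """A toy tool: evaluate a basic arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def fake_llm(messages: list[dict]) -> dict:
    """Stand-in for a real chat model that decides when to call a tool."""
    if messages[-1]["role"] == "user":
        return {"tool": "calculator", "arguments": {"expression": "37 * 91"}}
    return {"content": f"The result is {messages[-1]['content']}."}

def run_agent(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    step = fake_llm(messages)
    if "tool" in step:  # dispatch the requested tool and append its output
        result = TOOLS[step["tool"]](**step["arguments"])
        messages.append({"role": "tool", "content": result})
        step = fake_llm(messages)
    return step["content"]

print(run_agent("What is 37 times 91?"))  # -> The result is 3367.
```

A full agent would run this observe/act loop repeatedly with a real model choosing among many tools; the single fixed step here is only meant to show the control flow.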
Lu Xiang
Institute of Automation, Chinese Academy of Sciences
Dialogue Systems, NLP
Yang Zhao
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
Yaping Zhang
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
Chengqing Zong
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China