🤖 AI Summary
Systematic analysis and domain-adaptive deployment of large language models (LLMs) in interdisciplinary research remain underexplored. Method: We propose a discipline-aware taxonomy of LLM applications, integrating four technical paradigms (supervised fine-tuning, retrieval-augmented generation (RAG), agent-based architectures, and tool-integrated reasoning) and analyze their feasibility and disciplinary alignment across mathematics, physics, chemistry, biology, and the humanities and social sciences. Contribution/Results: The study identifies key cross-disciplinary challenges, including limited domain-specific knowledge depth and the lack of standardized evaluation metrics, as well as emerging trends such as discipline-customized agents and improved interpretability. It also offers methodological recommendations for adapting LLMs to complex scholarly contexts. The resulting framework gives researchers a structured, actionable reference for effective and innovative LLM deployment across diverse academic domains.
📝 Abstract
Large Language Models (LLMs) have demonstrated transformative potential across numerous disciplines, reshaping existing research methodologies and fostering interdisciplinary collaboration. However, a systematic understanding of their integration into diverse disciplines remains lacking. This survey provides a comprehensive overview of LLM applications in interdisciplinary studies, categorizing research efforts from both technical and application perspectives. From a technical standpoint, key methodologies that enhance the adaptability and effectiveness of LLMs in discipline-specific contexts are examined, including supervised fine-tuning, retrieval-augmented generation, agent-based approaches, and tool-use integration. From an application standpoint, the paper explores how LLMs contribute to various disciplines, including mathematics, physics, chemistry, biology, and the humanities and social sciences, demonstrating their role in discipline-specific tasks. Prevailing challenges are critically examined, and promising research directions are highlighted in light of recent advances in LLMs. By providing a comprehensive overview of technical developments and applications in this field, this survey aims to serve as a valuable resource for researchers navigating the complex landscape of LLMs in interdisciplinary studies.