🤖 AI Summary
This work addresses a gap in the evaluation of large language models (LLMs) on scientific text classification. We present the first systematic benchmark of DeepSeek-R1 and GPT-4o on sentence-level scientific relation classification. To this end, we construct a high-quality, cross-disciplinary dataset of cleaned scientific text and propose a prompt-driven evaluation framework tailored to scientific relation identification, supporting both zero-shot and few-shot classification. Our methodology combines web API invocation with structured output parsing to enable multi-dimensional analysis of consistency and robustness. Results show that GPT-4o is more stable on fine-grained relation recognition, while DeepSeek-R1 shows notable potential in high-term-density contexts; both models are highly sensitive to prompt template design. This study fills a gap in assessing the adaptability of open-source LLMs to scientific text and validates the effectiveness and generalizability of the proposed evaluation paradigm.
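The summary's pipeline of prompt-driven classification with structured output parsing can be sketched as below. This is a minimal illustration, not the authors' actual code: the relation labels, prompt wording, and JSON reply schema are all hypothetical placeholders, and the actual web API call is omitted.

```python
import json

# Hypothetical relation labels for illustration; the paper's actual
# category set is not given in this summary.
RELATION_LABELS = ["synonym-of", "hyponym-of", "used-for", "part-of", "compare"]

def build_prompt(sentence: str, labels=RELATION_LABELS) -> str:
    """Assemble a zero-shot prompt that asks the model to pick exactly one
    relation label and reply as JSON, so the answer can be parsed mechanically."""
    return (
        "Classify the relation expressed between the two marked scientific "
        f"terms in the sentence below. Choose exactly one label from {labels}. "
        'Reply only with JSON of the form {"label": "<label>"}.\n\n'
        f"Sentence: {sentence}"
    )

def parse_reply(reply: str, labels=RELATION_LABELS):
    """Parse the model's structured reply; return None for malformed JSON
    or an out-of-vocabulary label, so such cases can be scored separately."""
    try:
        label = json.loads(reply).get("label")
    except json.JSONDecodeError:
        return None
    return label if label in labels else None
```

Constraining the reply to a fixed JSON schema is what makes the consistency and robustness analysis mechanical: every model response either parses to a known label or is counted as a failure case.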
📝 Abstract
This study examines how large language models categorize sentences from scientific papers using prompt engineering. We use two advanced web-based models, GPT-4o (by OpenAI) and DeepSeek-R1, to classify sentences into predefined relationship categories. Although DeepSeek-R1 has been tested on standard benchmarks in its technical report, its performance on scientific text categorization remains unexplored. To address this gap, we introduce an evaluation method designed specifically for this task and compile a dataset of cleaned scientific papers from diverse domains. Using this dataset as a common platform, we compare the two models and analyze their effectiveness and consistency in categorization.