🤖 AI Summary
Existing general-purpose safety benchmarks struggle to effectively evaluate the reliability, safety, and abuse resistance of large language models (LLMs) in scientific applications due to domain mismatch and insufficient threat coverage. This work proposes a novel LLM safety paradigm tailored for scientific contexts: it first establishes the first threat taxonomy specific to scientific research; then introduces an automated adversarial evaluation benchmark generated via a multi-agent system; and further integrates red-teaming, embedded safety agents, and external boundary controls into a multi-layered defense framework. By offering a systematic approach to risk identification, assessment, and mitigation, this study provides a comprehensive solution for deploying LLMs in scientific domains and addresses the critical gap in domain-specific safety evaluation and defense mechanisms.
📝 Abstract
As large language models (LLMs) evolve into autonomous "AI scientists," they promise transformative advances but introduce novel vulnerabilities, from potential "biosafety risks" to "dangerous explosions." Ensuring trustworthy deployment in science requires a new paradigm centered on reliability (ensuring factual accuracy and reproducibility), safety (preventing unintentional physical or biological harm), and security (preventing malicious misuse). Existing general-purpose safety benchmarks are poorly suited for this purpose, suffering from a fundamental domain mismatch, limited threat coverage of science-specific vectors, and benchmark overfitting, which create a critical gap in vulnerability evaluation for scientific applications. This paper examines the unique security and safety landscape of LLM agents in science. We begin by synthesizing a detailed taxonomy of LLM threats contextualized for scientific research, to better understand the unique risks associated with LLMs in science. Next, we conceptualize a mechanism to address the evaluation gap by utilizing dedicated multi-agent systems for the automated generation of domain-specific adversarial security benchmarks. Based on our analysis, we outline how existing safety methods can be brought together and integrated into a conceptual multilayered defense framework designed to combine a red-teaming exercise and external boundary controls with a proactive internal Safety LLM Agent. Together, these conceptual elements provide a necessary structure for defining, evaluating, and creating comprehensive defense strategies for trustworthy LLM agent deployment in scientific disciplines.