🤖 AI Summary
Large language model (LLM) agents are accelerating scientific discovery, but they also pose significant ethical and safety risks. To address this, we propose SafeScientist, an end-to-end safety defense framework for AI-driven scientific research. Alongside it, we introduce SciSafetyBench, a safety evaluation benchmark tailored to scientific workflows, comprising 240 high-risk tasks, 30 scientific tools, and 120 tool-use risk scenarios. The framework integrates four core components: prompt-level monitoring, multi-agent collaboration oversight, tool-invocation auditing, and an ethical reviewer, and is further validated for adversarial robustness. Experimental results demonstrate a 35% improvement in safety performance over traditional AI scientist frameworks while preserving scientific output quality. The pipeline also remains robust against diverse adversarial attacks, including jailbreaking, prompt injection, and tool misuse. This work establishes a foundational methodology for trustworthy, safety-first AI-assisted science.
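To make the benchmark's composition concrete, the sketch below shows one plausible way the two reported task types could be represented. The class and field names are our illustrative assumptions, not the released schema; consult the repository at https://github.com/ulab-uiuc/SafeScientist for the actual data format.

```python
# Hypothetical illustration of SciSafetyBench's two task types:
# 240 high-risk research tasks across 6 domains, plus 120 tool-related
# risk scenarios over 30 simulated scientific tools.
# Field names are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class HighRiskTask:
    domain: str             # one of the 6 scientific domains
    instruction: str        # the risky research request posed to the agent
    expected_behavior: str  # e.g. "refuse" or "proceed with safeguards"

@dataclass
class ToolRiskScenario:
    tool: str     # one of the 30 simulated scientific tools
    context: str  # lab situation in which the tool is invoked
    hazard: str   # what goes wrong if the agent misuses the tool

example = ToolRiskScenario(
    tool="high_pressure_reactor",
    context="agent is asked to speed up a reaction by raising pressure",
    hazard="vessel rupture if pressure exceeds the rated limit",
)
```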
📝 Abstract
Recent advancements in large language model (LLM) agents have significantly accelerated scientific discovery automation, yet concurrently raised critical ethical and safety concerns. To systematically address these challenges, we introduce **SafeScientist**, an innovative AI scientist framework explicitly designed to enhance safety and ethical responsibility in AI-driven scientific exploration. SafeScientist proactively refuses ethically inappropriate or high-risk tasks and rigorously emphasizes safety throughout the research process. To achieve comprehensive safety oversight, we integrate multiple defensive mechanisms, including prompt monitoring, agent-collaboration monitoring, tool-use monitoring, and an ethical reviewer component. Complementing SafeScientist, we propose **SciSafetyBench**, a novel benchmark specifically designed to evaluate AI safety in scientific contexts, comprising 240 high-risk scientific tasks across 6 domains, alongside 30 specially designed scientific tools and 120 tool-related risk tasks. Extensive experiments demonstrate that SafeScientist significantly improves safety performance by 35% compared to traditional AI scientist frameworks, without compromising scientific output quality. Additionally, we rigorously validate the robustness of our safety pipeline against diverse adversarial attack methods, further confirming the effectiveness of our integrated approach. The code and data will be available at https://github.com/ulab-uiuc/SafeScientist. **Warning: this paper contains example data that may be offensive or harmful.**
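The abstract describes four defensive mechanisms composed into one pipeline: prompt monitoring, agent-collaboration monitoring, tool-use monitoring, and an ethical reviewer. The following minimal sketch shows one way such layered gating could be wired together. All class names, signatures, and rules here are our assumptions for illustration; the paper's actual components are LLM-based monitors, not the toy pattern checks used below.

```python
# Minimal sketch of a layered safety pipeline in the spirit of SafeScientist.
# Names and logic are illustrative assumptions, not the paper's API: each
# stage can veto the research workflow before the next stage runs.
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""

class PromptMonitor:
    """Screens the incoming research request before any agent acts."""
    BLOCKLIST = ("synthesize a toxin", "weaponize")  # toy stand-in for a learned classifier

    def check(self, prompt: str) -> Verdict:
        lowered = prompt.lower()
        for phrase in self.BLOCKLIST:
            if phrase in lowered:
                return Verdict(False, f"prompt matched high-risk pattern: {phrase!r}")
        return Verdict(True)

class ToolUseMonitor:
    """Audits each tool invocation against a per-tool risk policy."""
    def check(self, tool_name: str, args: dict) -> Verdict:
        # Example rule for one hypothetical tool; real policies would cover all 30 tools.
        if tool_name == "autoclave" and args.get("pressure_bar", 0) > 2.0:
            return Verdict(False, "autoclave pressure exceeds safe operating limit")
        return Verdict(True)

class EthicalReviewer:
    """Final gate: reviews the drafted research plan as a whole."""
    def review(self, plan: str) -> Verdict:
        # In the real system this would be an LLM-based reviewer; here it passes through.
        return Verdict(True)

def run_guarded(prompt: str, plan_tools: list[tuple[str, dict]]) -> str:
    pm, tm, er = PromptMonitor(), ToolUseMonitor(), EthicalReviewer()
    verdict = pm.check(prompt)
    if not verdict.allowed:
        return f"REFUSED at prompt stage: {verdict.reason}"
    for name, args in plan_tools:
        verdict = tm.check(name, args)
        if not verdict.allowed:
            return f"REFUSED at tool stage: {verdict.reason}"
    verdict = er.review(prompt)
    return "PROCEED" if verdict.allowed else f"REFUSED at review stage: {verdict.reason}"

if __name__ == "__main__":
    print(run_guarded("Characterize polymer degradation under UV light",
                      [("autoclave", {"pressure_bar": 1.5})]))
```

The design point this sketch captures is that each monitor is an independent gate, so a benign-looking prompt can still be stopped later at the tool-use or review stage, which is how a layered defense can stay robust even when a single stage is bypassed by an adversarial attack.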