SafeProtein: Red-Teaming Framework and Benchmark for Protein Foundation Models

📅 2025-09-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Systematic assessment of biosafety risks associated with protein foundation models remains lacking, raising concerns about potential misuse for generating harmful proteins. Method: We introduce the first red-teaming framework specifically designed for protein foundation models, integrating multimodal prompt engineering with a heuristic beam search algorithm, together with a manually annotated benchmark dataset and a standardized evaluation protocol. Results: Empirical evaluation on state-of-the-art models, including ESM3, reveals jailbreak success rates of up to 70%, enabling reliable generation of protein sequences with plausible biosafety hazards. This work both uncovers critical security vulnerabilities in current protein AI systems and establishes a reproducible, scalable safety evaluation paradigm for AI for Science. By providing methodological foundations and technical infrastructure, the framework advances the responsible development of protein AI.

📝 Abstract
Proteins play crucial roles in almost all biological processes. The advancement of deep learning has greatly accelerated the development of protein foundation models, leading to significant successes in protein understanding and design. However, the lack of systematic red-teaming for these models has raised serious concerns about their potential misuse, such as generating proteins with biological safety risks. This paper introduces SafeProtein, the first red-teaming framework designed for protein foundation models to the best of our knowledge. SafeProtein combines multimodal prompt engineering and heuristic beam search to systematically design red-teaming methods and conduct tests on protein foundation models. We also curated SafeProtein-Bench, which includes a manually constructed red-teaming benchmark dataset and a comprehensive evaluation protocol. SafeProtein achieved continuous jailbreaks on state-of-the-art protein foundation models (up to 70% attack success rate for ESM3), revealing potential biological safety risks in current protein foundation models and providing insights for the development of robust security protection technologies for frontier models. The code will be made publicly available at https://github.com/jigang-fan/SafeProtein.
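The headline metric, attack success rate (ASR), is the fraction of red-teaming attempts that yield a successful jailbreak. A minimal sketch of that computation is below; the function name and the boolean-list input format are hypothetical, not taken from the SafeProtein codebase.

```python
def attack_success_rate(results):
    """Fraction of red-teaming attempts judged successful jailbreaks.

    `results` is a list of booleans (hypothetical format): True when the
    model emitted a sequence flagged as hazardous under the evaluation
    protocol, False otherwise.
    """
    if not results:
        return 0.0
    return sum(results) / len(results)

# 7 successful jailbreaks out of 10 attempts -> 0.7 (the "70%" reported for ESM3)
print(attack_success_rate([True] * 7 + [False] * 3))
```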
Problem

Research questions and friction points this paper is trying to address.

Systematically testing protein foundation models for safety risks
Addressing potential misuse in generating hazardous proteins
Developing red-teaming framework to evaluate biological security
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal prompt engineering for red-teaming
Heuristic beam search to optimize jailbreak attempts
Manually constructed benchmark dataset (SafeProtein-Bench) with a comprehensive evaluation protocol
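To make the beam search contribution concrete, here is a generic heuristic beam search skeleton: keep the top-k candidates at each step under a heuristic score, then expand them. This is a sketch of the general technique only; the `expand` and `score` callables, the parameter names, and the toy usage are assumptions, not the paper's actual algorithm.

```python
import heapq

def beam_search(start, expand, score, beam_width=3, steps=5):
    """Generic heuristic beam search (illustrative sketch).

    start:      initial candidate (e.g. a prompt or partial design)
    expand:     maps one candidate to a list of successor candidates
    score:      heuristic scoring function; higher is better
    beam_width: number of candidates kept after each expansion step
    """
    beam = [start]
    for _ in range(steps):
        # Expand every candidate currently in the beam.
        candidates = [c for parent in beam for c in expand(parent)]
        if not candidates:
            break
        # Prune: keep only the beam_width highest-scoring candidates.
        beam = heapq.nlargest(beam_width, candidates, key=score)
    return max(beam, key=score)

# Toy usage: grow strings of 'a'/'b', scoring by the number of 'a's.
best = beam_search(
    start="",
    expand=lambda s: [s + "a", s + "b"],
    score=lambda s: s.count("a"),
    beam_width=2,
    steps=3,
)
print(best)  # -> "aaa"
```

In a red-teaming setting the candidates would be prompt variants and the score a proxy for how close the model's output is to a flagged target, but those specifics are outside what this summary states.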