🤖 AI Summary
Open-source large language models (LLMs) lack production-grade robustness against prompt injection attacks. Method: This paper introduces Meta-SecAlign—the first open-source, open-weight LLM with built-in security alignment—achieved via joint instruction tuning and safety objective alignment within an enhanced SecAlign framework, enabling efficient and robust optimization without task-specific fine-tuning. Contribution/Results: Trained solely on general instruction data, Meta-SecAlign generalizes effectively to unseen downstream tasks (e.g., tool use, agent-based web navigation). Empirical evaluation shows that Meta-SecAlign-70B achieves state-of-the-art robustness across nine utility benchmarks and seven security benchmarks, matching the utility of protected closed-source commercial models while significantly advancing co-evolutionary research in adversarial robustness and practical deployment of secure LLMs.
📝 Abstract
Prompt injection attacks pose a significant security threat to LLM-integrated applications. Model-level defenses have shown strong effectiveness, but are currently deployed into commercial-grade models in a closed-source manner. We believe open-source models are needed by the AI security community, where co-development of attacks and defenses through open research drives scientific progress in mitigation against prompt injection attacks. To this end, we develop Meta SecAlign, the first open-source and open-weight LLM with built-in model-level defense that achieves commercial-grade model performance. We provide complete details of our training recipe, which utilizes an improved version of the SOTA SecAlign defense. Evaluations on 9 utility benchmarks and 7 security benchmarks show that Meta SecAlign, despite being trained on a generic instruction-tuning dataset, confers security in unseen downstream tasks, including tool-calling and agentic web navigation, in addition general instruction-following. Our best model -- Meta-SecAlign-70B -- achieves state-of-the-art robustness against prompt injection attacks and comparable utility to closed-source commercial LLM with model-level defense.