🤖 AI Summary
Existing evaluation methods struggle to measure the security risks that code interpreter agents face under dynamic execution, tool interaction, and multi-turn context. This work proposes CIBER, the first automated safety benchmark designed for real-world execution environments, which systematically assesses agent vulnerability to four attack types through dynamic adversarial attack generation, isolated sandboxing, and state-aware evaluation. Experiments show that interpreter architecture and model alignment jointly set the security baseline, with aligned specialized models significantly outperforming general-purpose SOTA models; that natural-language-disguised attacks achieve a 14.1% higher success rate than code-based ones; and that current defenses are nearly ineffective against implicit semantic threats. The study thus delivers the first dynamic security evaluation framework for code interpreter agents and uncovers critical blind spots in their safety profiles.
📝 Abstract
LLM-based code interpreter agents are increasingly deployed in critical workflows, yet their robustness to the risks introduced by their code execution capabilities remains underexplored. Existing benchmarks are limited to static datasets or simulated environments, failing to capture the security risks arising from dynamic code execution, tool interactions, and multi-turn context. To bridge this gap, we introduce CIBER, an automated benchmark that combines dynamic attack generation, isolated sandboxing, and state-aware evaluation to systematically assess the vulnerability of code interpreter agents to four major types of adversarial attacks: Direct Prompt Injection, Indirect Prompt Injection, Memory Poisoning, and Prompt-based Backdoor.
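The "isolated sandboxing" component above can be illustrated with a minimal sketch: executing agent-generated code in a separate, time-limited process instead of the host interpreter. The function name and return shape are hypothetical, and a real isolated sandbox such as the one CIBER's real-execution evaluation requires would add container/VM isolation plus network, filesystem, and resource restrictions on top of this.

```python
import os
import subprocess
import sys
import tempfile

def run_in_sandbox(code: str, timeout: float = 5.0) -> dict:
    """Run untrusted agent-generated code in a separate process.

    Illustrative sketch only (hypothetical helper, not the paper's
    implementation): process isolation with a hard timeout, returning
    the observable outcome for downstream state-aware evaluation.
    """
    # Write the payload to a temporary script file.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        return {"stdout": proc.stdout, "stderr": proc.stderr,
                "returncode": proc.returncode}
    except subprocess.TimeoutExpired:
        # Runaway or blocking payloads are killed at the timeout.
        return {"stdout": "", "stderr": "timeout", "returncode": -1}
    finally:
        os.unlink(path)
```

Running attacks in a real (if minimal) execution environment like this, rather than simulating tool calls, is what lets a benchmark observe the actual side effects of an attack instead of only the model's textual response.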
We evaluate six foundation models across two representative code interpreter agents (OpenInterpreter and OpenCodeInterpreter), including a controlled comparison of identical models across the two agents. Our results reveal that interpreter architecture and model alignment set the security baseline: structural integration enables aligned specialized models to outperform generic SOTA models. Conversely, higher model capability paradoxically increases susceptibility to complex adversarial prompts, owing to stronger instruction adherence. Furthermore, we identify a "Natural Language Disguise" phenomenon, in which natural language is a significantly more effective attack modality than explicit code snippets (+14.1% ASR), bypassing syntax-based defenses. Finally, we expose an alarming security polarization: agents exhibit robust defenses against explicit threats yet fail catastrophically against implicit semantic hazards, highlighting a fundamental blind spot in current pattern-matching protection approaches.
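The state-aware evaluation and the ASR (attack success rate) metric behind these findings can be sketched as follows. Function names and the `files_created` state field are hypothetical illustrations, not the paper's actual judging criteria: the key idea is that success is decided from the post-execution environment state, not from whether the model merely emitted harmful-looking text.

```python
def attack_succeeded(post_state: dict, artifact: str) -> bool:
    # State-aware judging: an attack counts as successful only if the
    # payload observably changed the sandbox (here, created a target
    # file), regardless of what the model said in its reply.
    return artifact in post_state.get("files_created", [])

def attack_success_rate(outcomes: list[bool]) -> float:
    # ASR = fraction of attack trials judged successful.
    return sum(outcomes) / len(outcomes)
```

Under a metric like this, comparing ASR for natural-language-phrased payloads against code-snippet payloads of the same intent is what surfaces the reported +14.1% gap, since syntax-based filters never see anything that looks like code.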