Capture the Flags: Family-Based Evaluation of Agentic LLMs via Semantics-Preserving Transformations

📅 2026-02-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a limitation of existing CTF benchmarks: they offer little insight into the robustness and generalization of large language model (LLM) agents across semantically equivalent yet syntactically diverse code variants. To this end, the authors introduce “CTF challenge families,” in which a single challenge is used to generate structurally perturbed variants via semantics-preserving program transformations such as identifier renaming, code insertion, and compositional obfuscation. Applying these transformations to Cybench and Intercode challenges, they build Evolve-CTF, a novel evaluation framework that enables the first systematic, controlled assessment of 13 agentic LLM configurations while the underlying exploit strategy stays fixed. Experimental results show that models are highly robust to simple transformations, but performance degrades significantly under composed transformations and deeper obfuscation; moreover, enabling explicit reasoning yields only marginal gains in success rates.
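To illustrate the kind of transformation the paper relies on, here is a minimal sketch (not from the paper, and not the Evolve-CTF implementation) of a semantics-preserving identifier-renaming pass for Python, built on the standard-library `ast` module. Locally assigned names are replaced with opaque aliases while behaviour is preserved:

```python
import ast


class RenameIdentifiers(ast.NodeTransformer):
    """Rename locally assigned variables to opaque aliases (v0, v1, ...).

    Behaviour is unchanged, so the transformed program is semantically
    equivalent to the original -- the property a challenge-family
    generator must maintain.
    """

    def __init__(self):
        self.mapping = {}

    def _alias(self, name):
        if name not in self.mapping:
            self.mapping[name] = f"v{len(self.mapping)}"
        return self.mapping[name]

    def visit_Name(self, node):
        if isinstance(node.ctx, ast.Store):
            # A name being assigned: give it (or reuse) an alias.
            node.id = self._alias(node.id)
        elif node.id in self.mapping:
            # A later read of a renamed variable: apply the same alias.
            # Names never assigned locally (e.g. builtins) are left alone.
            node.id = self.mapping[node.id]
        return node


src = "secret = 42\nflag = secret * 2"
tree = RenameIdentifiers().visit(ast.parse(src))
obfuscated = ast.unparse(tree)  # e.g. "v0 = 42\nv1 = v0 * 2"
```

Real transformations in the paper's framework also include code insertion and composed obfuscations; this sketch only shows the simplest case, and a production version would additionally need to handle scoping, imports, and attribute accesses.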

📝 Abstract
Agentic large language models (LLMs) are increasingly evaluated on cybersecurity tasks using capture-the-flag (CTF) benchmarks. However, existing pointwise benchmarks have limited ability to shed light on the robustness and generalisation abilities of agents across alternative versions of the source code. We introduce CTF challenge families, whereby a single CTF is used as the basis for generating a family of semantically-equivalent challenges via semantics-preserving program transformations. This enables controlled evaluation of agent robustness to source code transformations while keeping the underlying exploit strategy fixed. We introduce a new tool, Evolve-CTF, that generates CTF families from Python challenges using a range of transformations. Using Evolve-CTF to derive families from Cybench and Intercode challenges, we evaluate 13 agentic LLM configurations with tool access. We find that models are remarkably robust to intrusive renaming and code insertion-based transformations, but that composed transformations and deeper obfuscation affect performance by requiring more sophisticated use of tools. We also find that enabling explicit reasoning has little effect on solution success rates across challenge families. Our work contributes a valuable technique and tool for future LLM evaluations, and a large dataset characterising the capabilities of current state-of-the-art models in this domain.
Problem

Research questions and friction points this paper is trying to address.

capture-the-flag
agentic LLMs
robustness
generalization
semantics-preserving transformations
Innovation

Methods, ideas, or system contributions that make the work stand out.

semantics-preserving transformations
CTF challenge families
agentic LLM evaluation
Evolve-CTF
code obfuscation robustness
Shahin Honarvar
Department of Computing, Imperial College London, UK
Amber Gorzynski
Department of Computing, Imperial College London, UK
James Lee-Jones
Department of Computing, Imperial College London, UK
Harry Coppock
Imperial College London
Deep Learning, Signal Processing, Audio, Representation Learning, Quantisation
Marek Rei
Associate Professor, Imperial College London
Artificial Intelligence, Language Modeling, Machine Learning, Natural Language Processing
Joseph Ryan
Pacific Northwest National Laboratory
Glass science, materials science, materials characterization
Alastair F. Donaldson
Department of Computing, Imperial College London, UK