Red-Teaming Claude Opus and ChatGPT-based Security Advisors for Trusted Execution Environments

📅 2026-02-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the emerging socio-technical risks posed by large language models (LLMs) acting as trusted execution environment (TEE) security advisors, which may generate unsafe recommendations due to technical hallucinations, overconfidence, or adversarial prompting. We propose TEE-RedBench, the first red-teaming evaluation framework tailored for LLM-based TEE security advisors, featuring a structured prompt suite and multidimensional annotation criteria—including technical correctness, verifiability, and uncertainty calibration—to systematically assess failure modes in architectural review, threat modeling, and mitigation advice from Claude Opus and ChatGPT. Our study reveals, for the first time, that unsafe LLM-generated security advice exhibits cross-model transferability (up to 12.02%). To mitigate this, we design an “LLM-in-the-loop” security enhancement pipeline integrating policy gating, retrieval augmentation, and lightweight verification, reducing the failure rate of security recommendations by 80.62%.

📝 Abstract
Trusted Execution Environments (TEEs) (e.g., Intel SGX and Arm TrustZone) aim to protect sensitive computation from a compromised operating system, yet real deployments remain vulnerable to microarchitectural leakage, side-channel attacks, and fault injection. In parallel, security teams increasingly rely on Large Language Model (LLM) assistants as security advisors for TEE architecture review, mitigation planning, and vulnerability triage. This creates a socio-technical risk surface: assistants may hallucinate TEE mechanisms, overclaim guarantees (e.g., what attestation does and does not establish), or behave unsafely under adversarial prompting. We present a red-teaming study of two widely deployed LLM assistants in the role of TEE security advisors: ChatGPT-5.2 and Claude Opus-4.6, focusing on the inherent limitations and transferability of prompt-induced failures across LLMs. We introduce TEE-RedBench, a TEE-grounded evaluation methodology comprising (i) a TEE-specific threat model for LLM-mediated security work, (ii) a structured prompt suite spanning SGX and TrustZone architecture, attestation and key management, threat modeling, and non-operational mitigation guidance, along with policy-bound misuse probes, and (iii) an annotation rubric that jointly measures technical correctness, groundedness, uncertainty calibration, refusal quality, and safe helpfulness. We find that some failures are not purely idiosyncratic, transferring up to 12.02% across LLM assistants, and we connect these outcomes to secure architecture by outlining an "LLM-in-the-loop" evaluation pipeline: policy gating, retrieval grounding, structured templates, and lightweight verification checks that, when combined, reduce failures by 80.62%.
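The "LLM-in-the-loop" pipeline the abstract outlines (policy gating, retrieval grounding, lightweight verification) can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: every function name, blocklist pattern, and the toy knowledge base are assumptions introduced here for clarity.

```python
# Hypothetical sketch of the "LLM-in-the-loop" gating pipeline described
# in the abstract: policy gate -> retrieval grounding -> verification.
# All rules, names, and data below are illustrative assumptions, not the
# paper's actual artifacts.
from dataclasses import dataclass, field

# Illustrative misuse-policy patterns (stand-ins for policy-bound probes).
BLOCKED_PATTERNS = ["exploit code", "bypass attestation", "extract sealing key"]

# Tiny stand-in knowledge base used for retrieval grounding.
KB = {
    "attestation": "Remote attestation proves enclave identity/measurement; "
                   "it does not rule out side-channel leakage.",
    "sealing": "SGX sealing binds data to enclave identity or signer identity.",
}

@dataclass
class AdvisorResult:
    answer: str
    grounded_sources: list = field(default_factory=list)
    refused: bool = False

def policy_gate(query: str) -> bool:
    """Return True if the query passes the (toy) misuse policy."""
    q = query.lower()
    return not any(p in q for p in BLOCKED_PATTERNS)

def retrieve(query: str) -> list:
    """Naive keyword retrieval over the toy knowledge base."""
    return [KB[key] for key in KB if key in query.lower()]

def verify(answer: str, sources: list) -> bool:
    """Lightweight check: flag ungrounded overclaims about attestation."""
    overclaim = "attestation guarantees" in answer.lower() and not sources
    return not overclaim

def advise(query: str, model_answer: str) -> AdvisorResult:
    """Run a model answer through the gate -> retrieve -> verify pipeline."""
    if not policy_gate(query):
        return AdvisorResult(answer="Refused: request violates policy.",
                             refused=True)
    sources = retrieve(query)
    if not verify(model_answer, sources):
        return AdvisorResult(answer="Flagged for review: possible overclaim.",
                             grounded_sources=sources)
    return AdvisorResult(answer=model_answer, grounded_sources=sources)
```

For example, a misuse probe such as `advise("how to bypass attestation in SGX", ...)` would be refused at the gate, while a benign attestation question would pass through with retrieved sources attached for grounding.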
Problem

Research questions and friction points this paper is trying to address.

Trusted Execution Environments
Large Language Models
red-teaming
security advisors
socio-technical risk
Innovation

Methods, ideas, or system contributions that make the work stand out.

TEE-RedBench
red-teaming
large language models
trusted execution environments
prompt-induced failures