Aegis: Towards Governance, Integrity, and Security of AI Voice Agents

📅 2026-02-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses critical security threats—such as privacy leakage, privilege escalation, and resource abuse—faced by AI voice agents in high-risk environments, where existing research lacks a systematic evaluation framework. To bridge this gap, the authors propose Aegis, the first comprehensive red-teaming framework tailored for voice agents. Aegis simulates real-world deployment pipelines to construct structured adversarial scenarios, integrating adversarial testing, risk modeling, access control analysis, and behavioral monitoring to holistically assess risks across governance, integrity, and security dimensions. Empirical evaluations in banking, IT support, and logistics settings reveal that even with stringent access controls, voice agents remain vulnerable to behavior-level attacks that circumvent defenses; notably, open-source models exhibit heightened susceptibility, underscoring the urgent need for multi-layered defense mechanisms.

Technology Category

Application Category

📝 Abstract
With the rapid advancement and adoption of Audio Large Language Models (ALLMs), voice agents are now being deployed in high-stakes domains such as banking, customer service, and IT support. However, their vulnerabilities to adversarial misuse still remain unexplored. While prior work has examined aspects of trustworthiness in ALLMs, such as harmful content generation and hallucination, systematic security evaluations of voice agents are still lacking. To address this gap, we propose Aegis, a red-teaming framework for the governance, integrity, and security of voice agents. Aegis models the realistic deployment pipeline of voice agents and designs structured adversarial scenarios of critical risks, including privacy leakage, privilege escalation, resource abuse, etc. We evaluate the framework through case studies in banking call centers, IT Support, and logistics. Our evaluation shows that while access controls mitigate data-level risks, voice agents remain vulnerable to behavioral attacks that cannot be addressed through access restrictions alone, even under strict access controls. We observe systematic differences across model families, with open-weight models exhibiting higher susceptibility, underscoring the need for layered defenses that combine access control, policy enforcement, and behavioral monitoring to secure next-generation voice agents.
Problem

Research questions and friction points this paper is trying to address.

voice agents
adversarial misuse
security evaluation
privacy leakage
privilege escalation
Innovation

Methods, ideas, or system contributions that make the work stand out.

red-teaming framework
voice agent security
adversarial scenarios
behavioral attacks
Audio Large Language Models
🔎 Similar Papers
No similar papers found.