🤖 AI Summary
Existing black-box evaluation methods struggle to assess the system-level dynamics and engineering characteristics of agent architectures in AI-Native systems. This work proposes the first white-box benchmark suite tailored for AI-Native systems, adopting an application-centric perspective grounded in the Model Context Protocol (MCP) and Agent-to-Agent (A2A) communication standards. By treating agent interactions as first-class citizens in distributed tracing, the framework enables fine-grained engineering analysis. Experiments across 21 system variants uncover several counterintuitive findings: lightweight models can exhibit better protocol adherence than flagship models, inference overhead dominates performance bottlenecks, and self-repair mechanisms can increase operational costs. These results offer empirical guidance for building reliable AI-Native systems. The associated tools and datasets have been open-sourced.
📝 Abstract
The transition from Cloud-Native to AI-Native architectures is fundamentally reshaping software engineering, replacing deterministic microservices with probabilistic agentic services. However, this shift renders traditional black-box evaluation paradigms insufficient: existing benchmarks measure raw model capabilities while remaining blind to system-level execution dynamics. To bridge this gap, we introduce AI-NativeBench, the first application-centric and white-box AI-Native benchmark suite grounded in Model Context Protocol (MCP) and Agent-to-Agent (A2A) standards. By treating agentic spans as first-class citizens within distributed traces, our methodology enables granular analysis of engineering characteristics beyond simple capabilities. Leveraging this benchmark across 21 system variants, we uncover critical engineering realities invisible to traditional metrics: a parameter paradox where lightweight models often surpass flagships in protocol adherence, a pervasive inference dominance that renders protocol overhead secondary, and an expensive failure pattern where self-healing mechanisms paradoxically act as cost multipliers on unviable workflows. This work provides the first systematic evidence to guide the transition from measuring model capability to engineering reliable AI-Native systems. To facilitate reproducibility and further research, we have open-sourced the benchmark and dataset.
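To make the "agentic spans as first-class citizens" idea concrete, here is a minimal, self-contained sketch of how an agent interaction (an MCP tool call or an A2A message exchange) could be recorded as a span inside a distributed trace. All names and attributes (`AgentSpan`, `protocol`, `input_tokens`, etc.) are illustrative assumptions, not the paper's actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional
import time
import uuid

@dataclass
class AgentSpan:
    """Hypothetical span type: one agent interaction recorded as a
    first-class node in a distributed trace, next to ordinary
    service spans."""
    name: str                      # e.g. "agent.tool_call:search"
    protocol: str                  # assumed values: "MCP" or "A2A"
    trace_id: str
    span_id: str = field(default_factory=lambda: uuid.uuid4().hex[:16])
    parent_span_id: Optional[str] = None
    start_ns: int = field(default_factory=time.monotonic_ns)
    end_ns: Optional[int] = None
    attributes: dict = field(default_factory=dict)  # tokens, retries, cost, ...

    def end(self) -> None:
        self.end_ns = time.monotonic_ns()

    @property
    def duration_ms(self) -> float:
        assert self.end_ns is not None, "span not ended"
        return (self.end_ns - self.start_ns) / 1e6

# A tiny two-span trace: an A2A orchestration span with a nested MCP tool call.
trace_id = uuid.uuid4().hex
root = AgentSpan(name="orchestrator.handle_task", protocol="A2A", trace_id=trace_id)
tool = AgentSpan(name="agent.tool_call:search", protocol="MCP",
                 trace_id=trace_id, parent_span_id=root.span_id,
                 attributes={"input_tokens": 412, "retries": 0})
tool.end()
root.end()

# Engineering analysis then becomes a query over spans, e.g. isolating
# protocol-level spans to compare their latency share against inference.
spans = [root, tool]
mcp_spans = [s for s in spans if s.protocol == "MCP"]
print(f"{len(mcp_spans)} MCP span(s); root took {root.duration_ms:.3f} ms")
```

With spans shaped like this, white-box metrics such as "protocol overhead vs. inference time" or "retry cost of self-repair loops" reduce to aggregations over the trace, which is the kind of analysis the abstract says black-box benchmarks cannot perform.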