🤖 AI Summary
This study addresses the lack of systematic understanding of how AI agents are deployed in production environments. We conduct the largest empirical investigation to date—surveying 306 practitioners and conducting 20 in-depth case studies across 26 domains—using a mixed-methods approach (quantitative surveys plus qualitative interviews). For the first time, we empirically characterize the real-world technical choices, development paradigms, and evaluation practices behind production-grade agents. Key findings include: 68% of production agents require human intervention within 10 steps; 70% rely on prompt engineering rather than model fine-tuning; and 74% depend primarily on human evaluation. Simpler, more controllable methods dominate practice, yet reliability remains the foremost bottleneck. By establishing the first large-scale, evidence-based map of practices and challenges in agent engineering, the study bridges the gap between academic research and industrial deployment.
📝 Abstract
AI agents are actively running in production across diverse industries, yet little is publicly known about which technical approaches enable successful real-world deployments. We present the first large-scale systematic study of AI agents in production, surveying 306 practitioners and conducting 20 in-depth case studies via interviews across 26 domains. We investigate why organizations build agents, how they build them, how they evaluate them, and what the top development challenges are. We find that production agents are typically built using simple, controllable approaches: 68% execute at most 10 steps before requiring human intervention, 70% rely on prompting off-the-shelf models instead of weight tuning, and 74% depend primarily on human evaluation. Reliability remains the top development challenge, driven by difficulties in ensuring and evaluating agent correctness. Despite these challenges, simple yet effective methods already enable agents to deliver impact across diverse industries. Our study documents the current state of practice and bridges the gap between research and deployment by providing researchers visibility into production challenges while offering practitioners proven patterns from successful deployments.