🤖 AI Summary
This study addresses a significant misalignment between current AI agent development and real-world labor market demands, where research disproportionately emphasizes programming tasks while neglecting the broader spectrum of mainstream occupations. By systematically mapping 43 benchmarks and 72,342 tasks onto the skill requirements and work domains of 1,016 U.S. occupations—and integrating labor market data with assessments of agent autonomy—the work provides the first quantitative evidence of this disconnect. Based on this empirical analysis, the authors propose three measurable principles for benchmark design: coverage, realism, and granular evaluation. These principles offer both theoretical grounding and practical guidance for developing and evaluating AI agents capable of operating effectively in authentic workplace contexts.
📝 Abstract
AI agents are increasingly developed and evaluated on benchmarks relevant to human work, yet it remains unclear how representative these benchmarking efforts are of the labor market as a whole. In this work, we systematically study the relationship between agent development efforts and the distribution of real-world human work by mapping benchmark instances to work domains and skills. We first analyze 43 benchmarks and 72,342 tasks, measuring their alignment with human employment and capital allocation across all 1,016 real-world occupations in the U.S. labor market. We reveal substantial mismatches between agent development, which tends to be programming-centric, and the categories in which human labor and economic value are concentrated. Within the work areas that agents currently target, we further characterize agent utility by measuring autonomy levels, providing practical guidance for agent interaction strategies across work scenarios. Building on these findings, we propose three measurable principles for designing benchmarks that better capture socially important and technically challenging forms of work: coverage, realism, and granular evaluation.
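To make the alignment analysis concrete, here is a minimal sketch of one way such a mismatch could be quantified: normalize benchmark-task counts and employment figures into distributions over occupational categories and compare them. The total variation distance used here, along with the category labels and numbers, are illustrative assumptions, not the paper's actual methodology or data.

```python
# Illustrative sketch: compare where benchmark tasks concentrate versus where
# human employment concentrates, across occupational categories.
# The distance metric and all inputs below are hypothetical, not from the paper.

def distribution(counts: dict[str, float]) -> dict[str, float]:
    """Normalize raw counts (tasks or jobs) into a probability distribution."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def mismatch(benchmark_tasks: dict[str, float], employment: dict[str, float]) -> float:
    """Total variation distance between benchmark coverage and employment shares."""
    p = distribution(benchmark_tasks)
    q = distribution(employment)
    categories = set(p) | set(q)
    return 0.5 * sum(abs(p.get(c, 0.0) - q.get(c, 0.0)) for c in categories)

# Hypothetical inputs: benchmark task counts vs. U.S. employment by occupational group.
tasks = {"Computer & Mathematical": 50_000, "Office & Administrative": 8_000, "Healthcare": 2_000}
jobs = {"Computer & Mathematical": 5.0e6, "Office & Administrative": 18.0e6, "Healthcare": 9.0e6}
print(f"coverage mismatch: {mismatch(tasks, jobs):.2f}")  # closer to 1 = larger misalignment
```

A value near 0 would indicate benchmark effort roughly proportional to employment, while a value near 1 would reflect the programming-centric skew the paper reports.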