Economics of Human and AI Collaboration: When is Partial Automation More Attractive than Full Automation?

📅 2026-03-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses a limitation of current automated decision-making systems, which often rely on binary “all-or-nothing” choices. The authors treat automation intensity as a continuous variable and develop a unified analytical framework that integrates AI scaling laws to estimate production functions, employs entropy-based metrics to quantify task complexity, and uses O*NET data, expert surveys, and GPT-4o–based task decomposition for calibration and validation in computer vision applications. Their findings demonstrate that partial automation frequently yields the lowest-cost solution: at the firm level, cost-effective deployment can substitute approximately 11% of computer-vision-exposed labor compensation, with economy-wide deployment raising this share substantially. Economies of scale likewise enhance the economic viability of automation. The framework provides both theoretical grounding and empirical support for optimizing human–machine collaboration at a granular level.
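The entropy-based complexity idea can be illustrated with a small sketch. Note the assumptions: the subtask probabilities, the `scale` parameter, and the mapping from accuracy and entropy to a substitution ratio are all hypothetical functional forms chosen for illustration, not the paper's calibrated measure.

```python
import math

def task_entropy(probs):
    """Shannon entropy (bits) of a subtask occurrence distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def substitution_ratio(accuracy, entropy, scale=1.0):
    """Hypothetical monotone mapping: at a given model accuracy,
    the share of human labor displaced falls as task entropy rises."""
    return accuracy ** (1.0 + scale * entropy)

# A repetitive task concentrates mass on one subtask (low entropy);
# a varied task spreads mass evenly (high entropy).
h_simple = task_entropy([0.9, 0.05, 0.05])
h_varied = task_entropy([0.25] * 4)  # uniform over 4 subtasks -> 2 bits

for name, h in [("simple", h_simple), ("varied", h_varied)]:
    print(f"{name}: H = {h:.2f} bits, "
          f"substitution at 90% accuracy = {substitution_ratio(0.9, h):.2f}")
```

Under this assumed form, the same 90%-accurate model displaces a larger share of labor on the low-entropy task, matching the paper's qualitative finding that low-complexity tasks see high substitution.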
📝 Abstract
This paper develops a unified framework for evaluating the optimal degree of task automation. Moving beyond binary automate-or-not assessments, we model automation intensity as a continuous choice in which firms minimize costs by selecting an AI accuracy level, from no automation through partial human-AI collaboration to full automation. On the supply side, we estimate an AI production function via scaling-law experiments linking performance to data, compute, and model size. Because AI systems exhibit predictable but diminishing returns to these inputs, the cost of higher accuracy is convex: good performance may be inexpensive, but near-perfect accuracy is disproportionately costly. Full automation is therefore often not cost-minimizing; partial automation, where firms retain human workers for residual tasks, frequently emerges as the equilibrium. On the demand side, we introduce an entropy-based measure of task complexity that maps model accuracy into a labor substitution ratio, quantifying human labor displacement at each accuracy level. We calibrate the framework with O*NET task data, a survey of 3,778 domain experts, and GPT-4o-derived task decompositions, implementing it in computer vision. Task complexity shapes substitution: low-complexity tasks see high substitution, while high-complexity tasks favor limited partial automation. Scale of deployment is a key determinant: AI-as-a-Service and AI agents spread fixed costs across users, sharply expanding economically viable tasks. At the firm level, cost-effective automation captures approximately 11% of computer-vision-exposed labor compensation; under economy-wide deployment, this share rises sharply. Since other AI systems exhibit similar scaling-law economics, our mechanisms extend beyond computer vision, reinforcing that partial automation is often the economically rational long-run outcome, not merely a transitional phase.
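The core cost-minimization mechanism can be sketched numerically. All functional forms and constants below (the scaling exponent `ALPHA`, the compute price `C_UNIT`, and the substitution mapping) are illustrative assumptions, not the paper's estimated production function; the sketch only shows why a convex accuracy cost yields an interior, partial-automation optimum.

```python
import numpy as np

# A firm picks an AI accuracy level a in [0, 1) to minimize
#   C(a) = cost of reaching accuracy a + wage bill for residual human work.
# Scaling laws suggest error falls as a power law in compute,
#   1 - a  ~  compute^(-alpha),
# so compute required for accuracy a grows convexly: (1 - a)^(-1/alpha).

ALPHA = 0.3      # assumed scaling-law exponent (illustrative)
C_UNIT = 0.05    # assumed cost per unit of compute (illustrative)
WAGE_BILL = 1.0  # normalized labor compensation for the task

def ai_cost(a):
    """Convex cost of accuracy under a power-law scaling curve."""
    return C_UNIT * (1.0 - a) ** (-1.0 / ALPHA)

def total_cost(a):
    # Simplest illustrative substitution mapping: displaced share = a.
    return ai_cost(a) + (1.0 - a) * WAGE_BILL

# Grid search over automation intensity.
grid = np.linspace(0.0, 0.999, 10_000)
a_star = grid[np.argmin(total_cost(grid))]

print(f"cost-minimizing accuracy: {a_star:.3f}")
print(f"cost at optimum:          {total_cost(a_star):.3f}")
print(f"cost with no automation:  {total_cost(0.0):.3f}")
```

Because near-perfect accuracy is disproportionately expensive while modest accuracy is cheap, the minimizer lands strictly inside the interval: the firm automates part of the task and retains human workers for the rest, which is the paper's partial-automation equilibrium.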
Problem

Research questions and friction points this paper is trying to address.

automation
human-AI collaboration
task complexity
scaling laws
labor substitution
Innovation

Methods, ideas, or system contributions that make the work stand out.

partial automation
scaling laws
task complexity
labor substitution
AI production function
Wensu Li
Massachusetts Institute of Technology
Atin Aboutorabi
École Polytechnique Fédérale de Lausanne
Harry Lyu
Massachusetts Institute of Technology
Kaizhi Qian
MIT-IBM Watson AI Lab
Martin Fleming
Massachusetts Institute of Technology
Brian C. Goehring
IBM Institute for Business Value
Neil Thompson
Director, MIT FutureTech, at the Computer Science and AI Lab and the Initiative on the Digital Economy