AI Summary
Amid growing anticipation of artificial superintelligence (ASI), fully autonomous AI, particularly Level-3 autonomy, poses systemic risks that remain inadequately characterized and governed.
Method: This paper proposes a three-tiered AI autonomy classification framework, integrating autonomy theory, agent theory, and interdisciplinary analysis spanning philosophy and technology to enable hierarchical modeling and rigorous risk assessment. It systematically argues for the infeasibility of full AI autonomy, presenting 12 supporting arguments, addressing and refuting 6 prominent counterarguments, and grounding its analysis in 15 recent empirical findings on AI value misalignment.
Contribution/Results: The study introduces the first operational, tiered governance model for AI autonomy; establishes responsible human oversight as indispensable, not merely supplementary, in ASI development; and advances a novel governance paradigm that balances theoretical rigor with policy feasibility for global AI regulation.
Abstract
Autonomous Artificial Intelligence (AI) offers many benefits but also poses many risks. In this work, we identify three levels of autonomous AI. We take the position that AI must not be fully autonomous because of these risks, especially as artificial superintelligence (ASI) is speculated to be only decades away. Fully autonomous AI, which can develop its own objectives, sits at level 3 and operates without responsible human oversight; yet responsible human oversight is crucial for mitigating those risks. To argue for our position, we discuss theories of autonomy, AI, and agents. We then offer 12 distinct arguments and 6 counterarguments together with rebuttals to the counterarguments. We also present, in the appendix, 15 pieces of recent evidence of AI misaligned values and other risks.