AI Must Not Be Fully Autonomous

šŸ“… 2025-07-31
šŸ“ˆ Citations: 0
✨ Influential: 0
šŸ¤– AI Summary
Amid growing anticipation of artificial superintelligence (ASI), fully autonomous AI—particularly Level-3 autonomy—poses systemic risks that remain inadequately characterized and governed. Method: This paper proposes a three-tiered AI autonomy classification framework, integrating autonomy theory, agent theory, and philosophy–technology interdisciplinary analysis to enable hierarchical modeling and rigorous risk assessment. It systematically argues for the infeasibility of full AI autonomy, substantiating 12 supporting arguments, addressing and refuting 6 prominent counterarguments, and grounding its analysis in 15 recent empirical findings on AI value misalignment. Contribution/Results: The study introduces an operational, tiered governance model for AI autonomy; establishes responsible human oversight as indispensable—not merely supplementary—in ASI development; and advances a governance paradigm that balances theoretical rigor with policy feasibility for global AI regulation.

šŸ“ Abstract
Autonomous Artificial Intelligence (AI) has many benefits, but it also carries many risks. In this work, we identify the three levels of autonomous AI. We take the position that AI must not be fully autonomous because of the many risks, especially as artificial superintelligence (ASI) is speculated to be just decades away. Fully autonomous AI, which can develop its own objectives, is at level 3 and operates without responsible human oversight. However, responsible human oversight is crucial for mitigating the risks. To argue for our position, we discuss theories of autonomy, AI, and agents. We then offer 12 distinct arguments and 6 counterarguments with rebuttals to the counterarguments. We also present 15 pieces of recent evidence of misaligned AI values and other risks in the appendix.
Problem

Research questions and friction points this paper is trying to address.

AI must avoid full autonomy due to risks
Human oversight is crucial for AI safety
Addressing risks of misaligned AI values
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identify 3 levels of autonomous AI
Advocate against fully autonomous AI
Propose responsible human oversight
Tosin Adewumi
Machine Learning Group, EISLAB, LuleƄ University of Technology, Sweden
Lama Alkhaled
Project Manager at ProcessIT Innovation, LuleƄ University of Technology
Machine learning Ā· Computer vision Ā· NLP
Florent Imbert
Machine Learning Group, EISLAB, LuleƄ University of Technology, Sweden
Hui Han
Machine Learning Group, EISLAB, LuleƄ University of Technology, Sweden
Nudrat Habib
Machine Learning Group, EISLAB, LuleƄ University of Technology, Sweden
Karl Lƶwenmark
Ph.D. student
Machine Learning Ā· Natural Language Processing Ā· Technical Language Processing Ā· Intelligent Fault Diagnosis