Position: AI agents should be regulated based on autonomous action sequences

📅 2025-02-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current AI regulation relies predominantly on indirect proxies, such as computational resources or parameter count, which makes it inadequate for detecting existential risks (e.g., human extinction) posed by highly autonomous AI agents capable of long-horizon planning and strategic reasoning. Method: the paper proposes the action sequences autonomously generated by AI agents as the core regulatory anchor, developing a risk assessment framework grounded in action-logic modeling that integrates AI safety governance theory with existential risk analysis. Contribution: the work introduces agent-generated action sequences as directly observable and verifiable regulatory evidence, moving beyond paradigms that depend on model scale or environmental feedback. It establishes regulatory principles tailored to high-autonomy AI agents and delivers an operationally feasible, environment-agnostic, and empirically verifiable pathway for existential risk evaluation, thereby providing a foundational methodology for international AI governance.

📝 Abstract
This position paper argues that AI agents should be regulated based on the sequence of actions they autonomously take. AI agents with long-term planning and strategic capabilities can pose significant risks of human extinction and irreversible global catastrophes. While existing regulations often focus on computational scale as a proxy for potential harm, we contend that such measures are insufficient for assessing the risks posed by AI agents whose capabilities arise primarily from inference-time computation. To support our position, we discuss relevant regulations and recommendations from AI scientists regarding existential risks, as well as the advantages of action sequences over existing impact measures that require observing environmental states.
Problem

Research questions and friction points this paper is trying to address.

Regulating AI agents by their level of autonomy
Addressing risks from AI agents' long-term strategic capabilities
Proposing action sequences as a better metric of autonomy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Regulate AI by its extent of autonomy
Focus on action-sequence metrics
Assess risks arising from inference-time computation
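To make the core proposal concrete, the following is a minimal, purely illustrative sketch of what anchoring regulation on an agent's logged action sequence (rather than on model scale) might look like. The action names, risk signals (irreversibility, resource acquisition), and thresholds are all invented for illustration and are not from the paper; the point is only that a logged sequence is directly observable and checkable without access to environmental state.

```python
from dataclasses import dataclass

# Hypothetical illustration: assess risk from an agent's logged action
# sequence. Field names, signals, and thresholds are assumptions made
# for this sketch, not the paper's actual framework.

@dataclass
class Action:
    name: str            # e.g. "read_file", "acquire_compute" (invented examples)
    irreversible: bool   # whether the action's effect cannot be undone
    resource_delta: int  # resources gained (+) or spent (-) by the action

@dataclass
class Assessment:
    length: int
    irreversible_count: int
    net_resource_gain: int
    flagged: bool

def assess(actions: list[Action],
           max_irreversible: int = 2,
           max_resource_gain: int = 10) -> Assessment:
    """Flag a sequence whose observable structure suggests high autonomy:
    many irreversible steps or sustained resource acquisition."""
    irreversible = sum(1 for a in actions if a.irreversible)
    gain = sum(a.resource_delta for a in actions)
    return Assessment(
        length=len(actions),
        irreversible_count=irreversible,
        net_resource_gain=gain,
        flagged=irreversible > max_irreversible or gain > max_resource_gain,
    )

# A short benign sequence is not flagged; a resource-acquiring one is.
benign = [Action("read_file", False, 0), Action("reply", False, 0)]
acquisitive = [Action("acquire_compute", True, 6),
               Action("acquire_compute", True, 6),
               Action("replicate", True, 3)]
print(assess(benign).flagged)       # False
print(assess(acquisitive).flagged)  # True
```

Note that the check consumes only the agent's own output (the sequence), which is what makes it environment-agnostic in the paper's sense: no environmental state needs to be observed to apply it.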