🤖 AI Summary
Current AI governance relies heavily on external human oversight and lacks mechanisms for autonomous legal compliance. Method: This paper proposes an "architecture-as-regulation" framework that embeds legal norms directly into AI decision-making by modeling law as a structural scaffold for rational choice under uncertainty. It integrates active inference (AIF), Bayesian updating, Markov decision processes (MDPs), and economic legal analysis (ELA) to construct AI agents with intentional reasoning and normative sensitivity. A novel safety-valve mechanism enables context-dependent preference modeling and dynamic trade-offs among competing objectives. Contribution/Results: Evaluation in a simulated autonomous-vehicle right-of-way scenario demonstrates that the framework enables real-time reconciliation of legal constraints with operational goals, improving the legal alignment and risk controllability of AI behavior without sacrificing task performance.
📝 Abstract
This paper presents a computational account of how legal norms can shape the behavior of artificial intelligence (AI) agents, grounded in the active inference framework (AIF) and informed by principles of economic legal analysis (ELA). The resulting model aims to capture the complexity of human decision-making under legal constraints, offering a candidate mechanism for agent governance in AI systems, that is, the (auto)regulation of AI agents themselves rather than of human actors in the AI industry. We propose that lawful, norm-sensitive AI behavior can be achieved through regulation by design, endowing agents with intentional control systems, or behavioral safety valves, that guide real-time decisions in accordance with normative expectations. To illustrate this, we simulate an autonomous driving scenario in which an AI agent must decide when to yield the right of way by balancing competing legal and pragmatic imperatives. The model formalizes how AIF can implement context-dependent preferences to resolve such conflicts, linking this mechanism to the conception of law as a scaffold for rational decision-making under uncertainty. We conclude by discussing how context-dependent preferences could function as safety mechanisms for autonomous agents, enhancing lawful alignment and risk mitigation in AI governance.
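To give a concrete flavor of the context-dependent preference mechanism described above, the sketch below scores a yield-versus-proceed decision by each action's expected log-preference over outcomes, shifting the preference vector with context. This is a minimal illustration under assumptions, not the paper's model: the outcome categories, likelihood matrices, preference values, and the restriction to the pragmatic-value term of expected free energy (omitting epistemic value) are all introduced here for exposition.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical outcomes: safe/lawful passage, right-of-way violation, delay.
OUTCOMES = ["safe_passage", "violation", "delay"]

# Illustrative likelihoods of each outcome under each action, per context.
# Rows: actions (yield, proceed); columns: outcomes.
LIKELIHOOD = {
    "pedestrian_present": np.array([
        [0.90, 0.00, 0.10],   # yield: mostly safe, some delay
        [0.30, 0.65, 0.05],   # proceed: high chance of a violation
    ]),
    "clear_road": np.array([
        [0.60, 0.00, 0.40],   # yield: needless delay becomes likely
        [0.95, 0.05, 0.00],   # proceed: almost always safe
    ]),
}

def context_preferences(context):
    """Context-dependent log-preferences over outcomes (the 'safety valve'):
    when a pedestrian is present, a violation is penalized far more heavily."""
    if context == "pedestrian_present":
        return np.array([2.0, -8.0, -0.5])  # strongly avoid violation
    return np.array([2.0, -4.0, -1.0])      # legal penalty relaxed, delay costlier

def choose_action(context, actions=("yield", "proceed")):
    """Score each action by its expected preference (the pragmatic-value
    term of expected free energy) and select via a softmax policy."""
    C = context_preferences(context)
    scores = LIKELIHOOD[context] @ C   # expected log-preference per action
    probs = softmax(scores)
    return actions[int(np.argmax(probs))], probs
```

With these illustrative numbers the agent yields when a pedestrian is present and proceeds on a clear road: the same action model yields different behavior solely because the preference vector, playing the role of the legal norm, changes with context.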