Out of Control -- Why Alignment Needs Formal Control Theory (and an Alignment Control Stack)

📅 2025-06-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current AI alignment research lacks a formal optimal-control-theoretic foundation and neglects interoperability among alignment protocols, leaving the controllability and safety boundaries of frontier models and agentic AI systems poorly characterized. This paper introduces formal optimal control theory into the AI alignment framework for the first time, constructing a hierarchical, interoperable alignment control stack spanning physical, algorithmic, system, and socio-technical layers. The stack provides verifiable and scalable control interfaces that enable coordinated, cross-protocol governance. Through layered modeling, mechanistic interpretability analysis, and formal interoperability specifications, the authors establish a unified framework for controllability analysis. This advances the theoretical understanding of safety boundaries and risk-propagation pathways in complex AI systems, furnishing a rigorous foundation for regulatory assessment and technical governance.

📝 Abstract
This position paper argues that formal optimal control theory should be central to AI alignment research, offering a distinct perspective from prevailing AI safety and security approaches. While recent work in AI safety and mechanistic interpretability has advanced formal methods for alignment, it often falls short of the generalisation achieved by control frameworks for other technologies. There is also little research into how to render different alignment/control protocols interoperable. We argue that by recasting alignment through the principles of formal optimal control, and by framing alignment in terms of a hierarchical stack, from physical to socio-technical layers, at which controls may be applied, we can develop a better understanding of the potential for, and the limitations of, controlling frontier models and agentic AI systems. To this end, we introduce an Alignment Control Stack: a hierarchical, layered alignment stack that identifies the measurement and control characteristics of each layer and specifies how the layers are formally interoperable. We argue that such analysis is also key to the assurances that governments and regulators will need if AI technologies are to benefit the community sustainably. Our position is that doing so bridges the well-established, empirically validated methods of optimal control with practical deployment considerations, creating a more comprehensive alignment framework and enhancing how we approach safety and reliability for advanced AI systems.
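To give a concrete sense of the "well-established and empirically validated methods of optimal control" the abstract proposes bridging to alignment, the sketch below solves a minimal discrete-time optimal control problem: the scalar linear-quadratic regulator (LQR), found by fixed-point iteration on the Riccati equation. The dynamics, cost weights, and numbers here are illustrative assumptions for exposition, not anything specified in the paper.

```python
def scalar_lqr(a: float, b: float, q: float, r: float,
               tol: float = 1e-12, max_iter: int = 10_000):
    """Solve the scalar discrete-time algebraic Riccati equation.

    Dynamics: x[t+1] = a*x[t] + b*u[t]
    Cost:     sum over t of q*x[t]**2 + r*u[t]**2
    Returns (P, K) where u = -K*x is the optimal feedback law.
    """
    p = q  # initialize with the terminal/state cost
    for _ in range(max_iter):
        # Riccati fixed-point update for the scalar case
        p_next = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
        if abs(p_next - p) < tol:
            p = p_next
            break
        p = p_next
    k = a * b * p / (r + b * b * p)  # optimal feedback gain
    return p, k


# Example: an open-loop unstable system (|a| > 1) stabilized by LQR feedback.
p, k = scalar_lqr(a=1.2, b=1.0, q=1.0, r=1.0)
closed_loop = 1.2 - 1.0 * k  # |a - b*K| < 1 means the regulated system is stable
```

The point of the analogy, in the paper's terms, is that LQR comes with formal guarantees (stability margins, cost optimality) of exactly the kind current alignment protocols lack.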
Problem

Research questions and friction points this paper is trying to address.

Applying formal optimal control theory to AI alignment research
Developing interoperable alignment and control protocols for AI systems
Creating a hierarchical alignment control stack for AI safety
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formal optimal control theory for AI alignment
Hierarchical Alignment Control Stack framework
Interoperable measurement and control layers
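As an illustration only, the sketch below models the paper's hierarchical-stack idea as composable layers that exchange a shared report type, one reading of "interoperable measurement and control layers." All names (`AlignmentLayer`, `LayerReport`), the scalar risk model, and the numeric bounds and attenuations are hypothetical assumptions; the paper specifies no implementation.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class LayerReport:
    """Illustrative interoperability record exchanged between layers."""
    layer: str
    risk: float         # normalized residual risk estimate in [0, 1]
    within_bound: bool  # whether this layer's safety constraint held


class AlignmentLayer:
    """One layer of a hypothetical stack: attenuates risk, checks its bound."""

    def __init__(self, name: str, bound: float, attenuation: float):
        self.name = name
        self.bound = bound              # maximum residual risk this layer tolerates
        self.attenuation = attenuation  # fraction of incoming risk the layer removes

    def step(self, incoming_risk: float) -> LayerReport:
        residual = incoming_risk * (1.0 - self.attenuation)
        return LayerReport(self.name, residual, residual <= self.bound)


class AlignmentControlStack:
    """Composes layers bottom-up; each consumes the residual risk from below."""

    def __init__(self, layers: List[AlignmentLayer]):
        self.layers = layers

    def evaluate(self, raw_risk: float) -> List[LayerReport]:
        reports, risk = [], raw_risk
        for layer in self.layers:
            report = layer.step(risk)
            reports.append(report)
            risk = report.risk  # risk propagates upward through the stack
        return reports


# The four layers named in the paper, with invented parameters.
stack = AlignmentControlStack([
    AlignmentLayer("physical", bound=0.9, attenuation=0.2),
    AlignmentLayer("algorithmic", bound=0.5, attenuation=0.5),
    AlignmentLayer("system", bound=0.3, attenuation=0.4),
    AlignmentLayer("socio-technical", bound=0.2, attenuation=0.3),
])
reports = stack.evaluate(1.0)
```

The shared `LayerReport` interface is the interoperability point: a regulator (or an upper layer) can audit each layer's bound independently of how that layer implements its control, which mirrors the paper's claim that cross-protocol governance needs formally specified layer interfaces.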