🤖 AI Summary
Current AI alignment research lacks a formal optimal control-theoretic foundation and neglects interoperability among alignment protocols, leaving the controllability and safety boundaries of frontier models and agentic AI systems poorly characterised. This paper introduces formal optimal control theory into the AI alignment framework for the first time, constructing a hierarchical, interoperable alignment control stack spanning the physical, algorithmic, system, and socio-technical layers. The stack provides verifiable, scalable control interfaces that enable coordinated governance across protocols. Through layered modelling, mechanistic interpretability analysis, and formal interoperability specifications, we establish a unified framework for controllability analysis. This significantly advances the theoretical understanding of safety boundaries and risk-propagation pathways in complex AI systems, furnishing a rigorous foundation for regulatory assessment and technical governance.
📝 Abstract
This position paper argues that formal optimal control theory should be central to AI alignment research, offering a perspective distinct from prevailing AI safety and security approaches. While recent work in AI safety and mechanistic interpretability has advanced formal methods for alignment, these methods often fall short of the generalisation achieved by control frameworks for other technologies. There is also little research into how to render different alignment and control protocols interoperable. We argue that by recasting alignment through the principles of formal optimal control, and by framing it in terms of a hierarchical stack, from physical to socio-technical layers, at which controls may be applied, we can better understand both the potential and the limitations of controlling frontier models and agentic AI systems. To this end, we introduce an Alignment Control Stack: a hierarchical, layered alignment stack that identifies the measurement and control characteristics of each layer and specifies how the layers are formally interoperable. We argue that such analysis is also key to the assurances that governments and regulators will need if AI technologies are to benefit the community sustainably. Our position is that this approach bridges the well-established, empirically validated methods of optimal control with practical deployment considerations, creating a more comprehensive alignment framework and enhancing how we approach safety and reliability for advanced AI systems.