🤖 AI Summary
This paper addresses governance risks that arise when AI systems are misaligned with the values of societal institutions and individuals. To tackle this, it proposes a "full-stack alignment" framework built on "thick models of value," which distinguish enduring values from context-sensitive preferences and thereby support principled normative reasoning, the modeling of collective goods, and the embedding of values across levels of the stack. Methodologically, the framework combines value-sensitive decision architectures, socially embedded agent design, and value-aware institutional and economic mechanisms, and is demonstrated in five areas: AI value stewardship, normatively competent agents, win-win negotiation systems, meaning-preserving economic mechanisms, and democratic regulatory institutions. The authors argue that this improves consistency between AI development and societal well-being, offering a theoretically grounded yet practically viable alignment paradigm for trustworthy AI that bridges normative theory, institutional design, and technical implementation.
📝 Abstract
Beneficial societal outcomes cannot be guaranteed by aligning individual AI systems with the intentions of their operators or users. Even an AI system that is perfectly aligned with the intentions of its operating organization can lead to bad outcomes if the goals of that organization are misaligned with those of other institutions and individuals. For this reason, we need full-stack alignment: the concurrent alignment of AI systems and the institutions that shape them with what people value. This can be done without imposing a particular vision of individual or collective flourishing. We argue that current approaches for representing values, such as utility functions, preference orderings, or unstructured text, fall short: they struggle to distinguish values from other signals, to support principled normative reasoning, and to model collective goods. We propose that thick models of value will be needed. These structure the way values and norms are represented, enabling systems to distinguish enduring values from fleeting preferences, to model the social embedding of individual choices, and to reason normatively, applying values in new domains. We demonstrate this approach in five areas: AI value stewardship, normatively competent agents, win-win negotiation systems, meaning-preserving economic mechanisms, and democratic regulatory institutions.
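To make the representational contrast concrete, here is a minimal, hypothetical sketch (not from the paper) of how a "thick" value record might differ from a bare preference: a preference ranks an option without a rationale, while a thick value carries the criteria behind it, so it can be re-applied in domains where it has never been observed. All names (`Signal`, `criteria`, `transfer`) are illustrative assumptions, not the authors' API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Signal:
    """One observed choice signal, e.g. from user behavior or feedback."""
    label: str
    # criteria: the considerations attended to when choosing;
    # empty for a bare preference, non-empty for an articulated value.
    criteria: tuple = ()
    context: str = ""

def is_enduring_value(s: Signal) -> bool:
    """A thick value carries criteria that can be re-applied in new
    contexts; a fleeting preference ranks an option without saying why."""
    return len(s.criteria) > 0

def transfer(signals, new_context):
    """Normative-reasoning sketch: carry only enduring values into a new
    domain by re-instantiating their criteria there."""
    return [Signal(s.label, s.criteria, new_context)
            for s in signals if is_enduring_value(s)]

signals = [
    Signal("prefers dark mode"),                      # fleeting preference
    Signal("honesty",
           ("disclose material facts",
            "avoid misleading framings"),
           context="negotiation"),                    # enduring value
]

carried = transfer(signals, "medical advice")
```

Under this toy rule, only "honesty" survives the move to the new domain; the unstructured preference does not, which is the distinction the abstract attributes to thick models of value.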