🤖 AI Summary
This work addresses a longstanding gap in AI alignment research: the underutilization of law as a source of normative and technical constraints. It introduces a “legal alignment” paradigm that systematically integrates legal rules, interpretive methodologies, and institutional structures into AI development. The framework advances three core directions: formal modeling of legal norms, reasoning mechanisms grounded in legal interpretation, and compliance-oriented evaluation and governance architectures. By embedding legal knowledge into the foundations of AI alignment, the study provides both theoretical grounding and technical pathways for building lawful, trustworthy, and cooperative AI systems. It also fosters interdisciplinary collaboration between legal scholarship and artificial intelligence, supporting the institutional implementation of legal alignment in real-world applications.
📝 Abstract
Alignment of artificial intelligence (AI) encompasses the normative problem of specifying how AI systems should act and the technical problem of ensuring that AI systems comply with those specifications. To date, AI alignment has generally overlooked an important source of knowledge and practice for grappling with these problems: law. In this paper, we aim to fill this gap by exploring how legal rules, principles, and methods can be leveraged to address problems of alignment and inform the design of AI systems that operate safely and ethically. This emerging field -- legal alignment -- focuses on three research directions: (1) designing AI systems to comply with the content of legal rules developed through legitimate institutions and processes, (2) adapting methods from legal interpretation to guide how AI systems reason and make decisions, and (3) harnessing legal concepts as a structural blueprint for confronting challenges of reliability, trust, and cooperation in AI systems. These research directions raise new conceptual, empirical, and institutional questions, including which specific laws particular AI systems should follow, how to create evaluations that assess their legal compliance in real-world settings, and how to develop governance frameworks that support the implementation of legal alignment in practice. Tackling these questions requires expertise across law, computer science, and other disciplines, offering these communities the opportunity to collaborate in designing AI for the better.