AI Alignment: A Comprehensive Survey

📅 2023-10-30
🏛️ arXiv.org
📈 Citations: 166
Influential: 4
🤖 AI Summary
AI alignment seeks to mitigate risks arising from potential misalignment between advanced AI systems and human intentions and values. Method: This paper introduces the RICE framework (Robustness, Interpretability, Controllability, and Ethicality) as a unified four-dimensional lens. It systematically integrates technical alignment methods, evaluation and verification techniques, and governance practices, organizing them into a dual-path paradigm: "forward alignment" (making systems aligned during training) and "backward alignment" (post-deployment assessment and governance). Contribution/Results: The work formalizes RICE as a coherent analytical taxonomy; introduces the forward/backward dichotomy to clarify the alignment lifecycle; and constructs a full-stack knowledge system spanning learning algorithms (e.g., learning from feedback, adaptation under distribution shift), verifiable evaluation (trustworthy metrics, provable alignment), and multi-layered governance mechanisms. It delivers a comprehensive survey of AI alignment and sustains community engagement via the companion website alignmentsurvey.com, which hosts tutorials, a curated paper repository, and practical implementation guidelines.
📝 Abstract
AI alignment aims to make AI systems behave in line with human intentions and values. As AI systems grow more capable, so do risks from misalignment. To provide a comprehensive and up-to-date overview of the alignment field, in this survey we delve into the core concepts, methodology, and practice of alignment. First, we identify four principles as the key objectives of AI alignment: Robustness, Interpretability, Controllability, and Ethicality (RICE). Guided by these four principles, we outline the landscape of current alignment research and decompose it into two key components: forward alignment and backward alignment. The former aims to make AI systems aligned via alignment training, while the latter aims to gain evidence about the systems' alignment and govern them appropriately to avoid exacerbating misalignment risks. On forward alignment, we discuss techniques for learning from feedback and learning under distribution shift. On backward alignment, we discuss assurance techniques and governance practices. We also release and continually update the website (www.alignmentsurvey.com), which features tutorials, collections of papers, blog posts, and other resources.
Problem

Research questions and friction points this paper is trying to address.

Ensuring AI systems align with human intentions and values
Addressing risks from misalignment as AI capabilities advance
Exploring forward and backward alignment techniques for AI safety
Innovation

Methods, ideas, or system contributions that make the work stand out.

Four principles guide AI alignment: Robustness, Interpretability, Controllability, Ethicality
Forward alignment trains AI systems to be aligned via learning from feedback and learning under distribution shift
Backward alignment gathers evidence of alignment through assurance techniques and governance practices
Jiaming Ji
Peking University
Tianyi Qiu
Peking University
Boyuan Chen
Peking University
Borong Zhang
University of Macau
Reinforcement learning, Robotics
Hantao Lou
Peking University
AI Alignment, AI Safety, Interpretability, Trustworthy AI
Kaile Wang
Peking University
Yawen Duan
University of Cambridge
Deep Learning, Artificial Intelligence, AI Safety
Zhonghao He
University of Cambridge
AI Alignment, Human-AI Systems, Machine Ethics, Interpretability, Truth-seeking AI
Jiayi Zhou
Peking University
Zhaowei Zhang
Peking University
AI Governance, AI Alignment, Game Theory, Human-AI Collaboration
Fanzhi Zeng
UT Austin
Reinforcement Learning, AI Alignment
Juntao Dai
Peking University
Xuehai Pan
Peking University
Multi-Agent Learning, Reinforcement Learning, AI Alignment, AI Agents
Kwan Yee Ng
University of Southern California
Aidan O'Gara
University of Southern California
Hua Xu
Peking University
Brian Tse
Jie Fu
S. McAleer
Carnegie Mellon University
Yaodong Yang
Peking University
Yizhou Wang
Peking University
Song-Chun Zhu
Peking University
Yike Guo
Hong Kong University of Science and Technology
Wen Gao
Peking University