Human-AI Interaction Alignment: Designing, Evaluating, and Evolving Value-Centered AI For Reciprocal Human-AI Futures

📅 2025-12-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
As generative AI becomes deeply embedded in society, the prevailing unidirectional paradigm, in which AI is aligned to human values, must evolve toward bidirectional human-AI alignment: a dynamic, co-adaptive, and co-evolutionary process in which humans and AI mutually adjust through interaction. Method: This workshop proposal outlines a systematic agenda for bidirectional alignment, emphasizing the simultaneity of value internalization and capability evolution and thereby moving beyond the limitations of instruction-based alignment. It integrates human-computer interaction (HCI), value-sensitive design, social impact assessment, dynamic contextual modeling, and interdisciplinary collaboration. Contribution/Results: The workshop aims to establish a cross-domain research agenda for bidirectional alignment, propose a scalable interaction-centered alignment methodology, and develop social impact assessment tools alongside practical guidance for implementing dynamic alignment. Together, these contributions offer an actionable framework for AI ethics governance and human-centered intelligent system design.

📝 Abstract
The rapid integration of generative AI into everyday life underscores the need to move beyond unidirectional alignment models that only adapt AI to human values. This workshop focuses on bidirectional human-AI alignment, a dynamic, reciprocal process in which humans and AI co-adapt through interaction, evaluation, and value-centered design. Building on our past CHI 2025 BiAlign SIG and ICLR 2025 Workshop, this workshop will bring together interdisciplinary researchers from HCI, AI, the social sciences, and other domains to advance value-centered AI and reciprocal human-AI collaboration. We focus on embedding human and societal values into alignment research, emphasizing not only steering AI toward human values but also enabling humans to critically engage with and evolve alongside AI systems. Through talks, interdisciplinary discussions, and collaborative activities, participants will explore methods for interactive alignment, frameworks for societal impact evaluation, and strategies for alignment in dynamic contexts. This workshop aims to bridge disciplinary gaps and establish a shared agenda for responsible, reciprocal human-AI futures.
Problem

Research questions and friction points that this paper addresses.

Develop bidirectional human-AI alignment through co-adaptation
Embed human and societal values into AI alignment research
Establish interdisciplinary frameworks for reciprocal human-AI collaboration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bidirectional human-AI alignment through co-adaptation
Value-centered design embedding human and societal values
Interactive methods and frameworks for dynamic alignment evaluation