Co-Constructing Alignment: A Participatory Approach to Situate AI Values

📅 2026-01-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study reconceptualizes AI alignment as a situated, ongoing, and co-constructed practice between humans and models, shifting focus from model-centric approaches to the active role of users in recognizing and responding to value misalignments during interaction. Through participatory workshops, misalignment diaries, and generative design activities, the research investigates how users understand and engage in alignment processes when using large language models as research assistants. Findings reveal that value misalignments often manifest as unexpected responses or disruptions in task execution and social interaction, prompting users to develop diverse strategies—ranging from prompt refinement and model behavior interpretation to deliberate disengagement. By foregrounding users as agentic cognitive participants, this work challenges dominant model-centered paradigms and offers a novel pathway toward human-centered alignment mechanisms.

📝 Abstract
As AI systems become embedded in everyday practice, value misalignment has emerged as a pressing concern. Yet, dominant alignment approaches remain model-centric, treating users as passive recipients of prespecified values rather than as epistemic agents who encounter and respond to misalignment during interactions. Drawing on situated perspectives, we frame alignment as an interactional practice co-constructed during human–AI interaction. We investigate how users understand and wish to contribute to this process through a participatory workshop that combines misalignment diaries with generative design activities. We surface how misalignments materialise in practice and how users envision acting on them, grounded in the context of researchers using Large Language Models as research assistants. Our findings show that misalignments are experienced less as abstract ethical violations than as unexpected responses and task or social breakdowns. Participants articulated roles ranging from adjusting and interpreting model behaviour to deliberate non-engagement as an alignment strategy. We conclude with implications for designing systems that support alignment as an ongoing, situated, and shared practice.
Problem

Research questions and friction points this paper is trying to address.

value misalignment
human-AI interaction
participatory design
situated alignment
AI ethics
Innovation

Methods, ideas, or system contributions that make the work stand out.

participatory alignment
situated AI
value misalignment
human-AI interaction
co-construction
Anne Arzberger
Delft University of Technology (CS), Netherlands
Enrico Liscio
Postdoctoral researcher, TU Delft
NLP, Deep Learning, Morality, Human Values, Ethics
M. Lupetti
Politecnico di Torino (Design), Italy
Íñigo Martinez de Rituerto de Troya
Delft University of Technology (TPM), Netherlands
Jie Yang
Assistant Professor, Delft University of Technology
human language technologies, human-centered AI, crowd computing, recommender systems