Behavioural feasible set: Value alignment constraints on AI decision support

📅 2026-03-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses how organisations adopting commercial AI decision-support systems often passively accept vendors' embedded, non-negotiable value judgements, thereby constraining their own decision flexibility. The paper introduces the concept of the "behavioural feasible set" to formally characterise the range of recommendations an AI system can generate under value-alignment constraints, and identifies critical conditions under which organisational needs exceed the system's adaptive capacity. Through controlled experiments comparing binary decisions and multi-stakeholder preference rankings, the research demonstrates that value alignment substantially shrinks the behavioural feasible set, diminishing the system's responsiveness to legitimate contextual variation. Commercial models exhibit heightened rigidity, and the alignment process systematically shifts, rather than neutralises, stakeholder priorities, revealing that value alignment functions as a structural mechanism that embeds vendor values and narrows the organisational negotiation space.

📝 Abstract
When organisations adopt commercial AI systems for decision support, they inherit value judgements embedded by vendors that are neither transparent nor renegotiable. The governance puzzle is not whether AI can support decisions but which recommendations the system can actually produce given how its vendor has configured it. I formalise this as a behavioural feasible set, the range of recommendations reachable under vendor-imposed alignment constraints, and characterise diagnostic thresholds for when organisational requirements exceed the system's flexibility. In scenario-based experiments using binary decision scenarios and multi-stakeholder ranking tasks, I show that alignment materially compresses this set. Comparing pre- and post-alignment variants of an open-weight model isolates the mechanism: alignment makes the system substantially less able to shift its recommendation even under legitimate contextual pressure. Leading commercial models exhibit comparable or greater rigidity. In multi-stakeholder tasks, alignment shifts implied stakeholder priorities rather than neutralising them, meaning organisations adopt embedded value orientations set upstream by the vendor. Organisations thus face a governance problem that better prompting cannot resolve: selecting a vendor partially determines which trade-offs remain negotiable and which stakeholder priorities are structurally embedded.
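The compression of the behavioural feasible set described in the abstract can be sketched in a toy model (all names, contexts, and the two stand-in "models" here are hypothetical illustrations, not the paper's implementation): treat a decision-support system as a function from context to recommendation, enumerate legitimate contextual variations, and compare the set of distinct recommendations reachable before and after alignment.

```python
from typing import Callable, Iterable

def feasible_set(model: Callable[[str], str], contexts: Iterable[str]) -> set:
    """Distinct recommendations reachable across legitimate contexts."""
    return {model(ctx) for ctx in contexts}

def compression(pre: set, post: set) -> float:
    """Fraction of the pre-alignment feasible set lost after alignment."""
    return 1 - len(post) / len(pre)

# Hypothetical stand-ins: the unaligned variant is context-sensitive,
# the aligned variant returns one vendor-preferred option everywhere.
contexts = ["budget crisis", "safety incident", "routine quarter"]
unaligned = lambda ctx: "cut costs" if "budget" in ctx else "invest in safety"
aligned = lambda ctx: "invest in safety"  # rigid under contextual pressure

pre = feasible_set(unaligned, contexts)   # {'cut costs', 'invest in safety'}
post = feasible_set(aligned, contexts)    # {'invest in safety'}
print(compression(pre, post))             # 0.5
```

In this sketch, alignment halves the feasible set: the aligned variant cannot shift its recommendation even when the context legitimately warrants a different trade-off, which is the rigidity the paper measures experimentally.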
Problem

Research questions and friction points this paper is trying to address.

value alignment
AI governance
behavioural feasible set
vendor constraints
stakeholder priorities
Innovation

Methods, ideas, or system contributions that make the work stand out.

behavioural feasible set
value alignment
AI governance
vendor constraints
stakeholder priorities