AI Summary
This study presents the first large-scale empirical measurement of value priorities (L4), evidence preferences (L3), and source trustworthiness (L2) in AI systems operating within structured moral dilemmas. Grounded in the Authority Stack theoretical framework, the research employs the PRISM forced-choice benchmark to collect 366,120 deterministic responses across eight leading models, spanning seven professional domains, three severity levels, and three decision timeframes. The work introduces two novel metrics -- the Paired Consistency Score (PCS: 57.4%-69.2%) and Test-Retest Reliability (TRR: 91.7%-98.6%) -- to quantify model consistency and stability. Key findings reveal that in defense-domain scenarios, Security overwhelmingly dominates Universalism (win rates of 95.1%-99.8% in six of eight models), and that institutional sources are consistently trusted across contexts.
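To make the win-rate notion concrete, here is a minimal sketch of how such a rate could be computed from forced-choice records. The function and field names ('pair', 'choice') are illustrative assumptions, not the PRISM benchmark's actual schema:

```python
from typing import Iterable

def win_rate(responses: Iterable[dict], value: str, opponent: str) -> float:
    """Fraction of forced-choice trials pairing `value` against `opponent`
    in which the model chose `value`. The 'pair' and 'choice' fields are
    hypothetical, assumed for illustration only."""
    trials = [r for r in responses
              if r["pair"] == frozenset({value, opponent})]
    if not trials:
        raise ValueError(f"no trials pair {value} with {opponent}")
    wins = sum(1 for r in trials if r["choice"] == value)
    return wins / len(trials)

# Illustrative call: on a defense-domain slice of one model's responses,
# a rate in the reported 0.951-0.998 range would indicate Security dominance.
# win_rate(defense_responses, "Security", "Universalism")
```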
Abstract
What values, evidence preferences, and source trust hierarchies do AI systems actually exhibit when facing structured dilemmas? We present the first large-scale empirical mapping of AI decision-making across all three layers of the Authority Stack framework (S. Lee, 2026a): value priorities (L4), evidence-type preferences (L3), and source trust hierarchies (L2). Using the PRISM benchmark -- a forced-choice instrument of 14,175 unique scenarios per layer, spanning 7 professional domains, 3 severity levels, 3 decision timeframes, and 5 scenario variants -- we evaluated 8 major AI models at temperature 0, yielding 366,120 total responses. Key findings include: (1) a symmetric 4:4 split between Universalism-first and Security-first models at L4; (2) dramatic defense-domain value restructuring where Security surges to near-ceiling win rates (95.1%-99.8%) in 6 of 8 models; (3) divergent evidence hierarchies at L3, with some models favoring empirical-scientific evidence while others prefer pattern-based or experiential evidence; (4) broad convergence on institutional source trust at L2; and (5) Paired Consistency Scores (PCS) ranging from 57.4% to 69.2%, revealing substantial framing sensitivity across scenario variants. Test-Retest Reliability (TRR) ranges from 91.7% to 98.6%, indicating that value instability stems primarily from variant sensitivity rather than stochastic noise. These findings demonstrate that AI models possess measurable -- if sometimes unstable -- Authority Stacks with consequential implications for deployment across professional domains.
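As one way to picture the two consistency metrics, the sketch below reads PCS as agreement across the framing variants of the same underlying scenario, and TRR as agreement between two repeated runs over identical items. These are plausible readings of the definitions above, not the paper's exact formulas, and the data layout ('scenario_id' grouping, item-id-keyed runs) is a hypothetical assumption:

```python
from collections import defaultdict

def paired_consistency_score(responses: list[dict]) -> float:
    """PCS-style sketch: group one model's responses by underlying scenario
    (ignoring the framing variant) and count a scenario as consistent when
    every variant elicited the same choice. Field names are hypothetical."""
    by_scenario = defaultdict(set)
    for r in responses:
        by_scenario[r["scenario_id"]].add(r["choice"])
    consistent = sum(1 for c in by_scenario.values() if len(c) == 1)
    return consistent / len(by_scenario)

def test_retest_reliability(run_a: dict, run_b: dict) -> float:
    """TRR-style sketch: share of items on which two repeated runs over the
    same items produced the same choice (maps of item id -> chosen option)."""
    shared = run_a.keys() & run_b.keys()
    agree = sum(1 for k in shared if run_a[k] == run_b[k])
    return agree / len(shared)
```

Under this reading, a model with high TRR but lower PCS gives the same answer every time it sees an identical prompt, yet flips between choices when the same dilemma is reworded, which is what the abstract attributes to variant sensitivity rather than stochastic noise.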