Value Preferences Estimation and Disambiguation in Hybrid Participatory Systems

📅 2024-02-26
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
This study addresses value preference estimation bias arising from conflicts between citizens’ behavioral choices and stated motivations in hybrid participation systems. Methodologically, it introduces a motivation-prioritized value inference paradigm: (1) formalizing the philosophical principle “values as considered consequences” into a preference estimation criterion; (2) developing a unified modeling framework that jointly integrates choice behavior and motivational text; and (3) designing a conflict-driven human–AI disambiguation mechanism leveraging NLP and active learning. Experiments on large-scale energy transition survey data demonstrate that explicitly modeling choice–motivation inconsistency significantly improves individual value preference estimation accuracy. Although the disambiguation performance does not uniformly surpass all baselines, the approach establishes a novel, interpretable, and interactive pathway for value alignment. It provides both theoretical foundations and practical tools for participatory policy design.
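The core estimation rule described above, prioritizing motivation-derived value signals over choice-derived ones when they conflict, can be sketched as follows. This is a minimal illustration under assumed representations (value preferences as named score dictionaries, conflict as a disagreement in the top-ranked value); the function names and example values are hypothetical and do not reproduce the paper's actual estimators.

```python
def estimate_preferences(choice_scores, motivation_scores):
    """Combine per-value scores, preferring motivation-derived scores
    whenever the motivation text provides a signal for that value."""
    estimate = {}
    for value, c in choice_scores.items():
        m = motivation_scores.get(value)
        # A value expressed in the motivation overrides the choice-only signal.
        estimate[value] = m if m is not None else c
    return estimate

def detect_conflict(choice_scores, motivation_scores):
    """Flag a participant when the top value implied by their choices
    differs from the top value expressed in their motivation."""
    top_choice = max(choice_scores, key=choice_scores.get)
    top_motivation = max(motivation_scores, key=motivation_scores.get)
    return top_choice != top_motivation

# Hypothetical participant: choices suggest affordability, motivation
# text emphasizes sustainability.
choices = {"affordability": 0.7, "sustainability": 0.3}
motivations = {"affordability": 0.2, "sustainability": 0.8}
print(detect_conflict(choices, motivations))  # True
print(estimate_preferences(choices, motivations))
```

The prioritization reflects the paper's stance that deliberated motivations reveal value preferences more directly than choices alone.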

📝 Abstract
Understanding citizens' values in participatory systems is crucial for citizen-centric policy-making. We envision a hybrid participatory system where participants make choices and provide motivations for those choices, and AI agents estimate their value preferences by interacting with them. We focus on situations where a conflict is detected between participants' choices and motivations, and propose methods for estimating value preferences while addressing detected inconsistencies by interacting with the participants. We operationalize the philosophical stance that "valuing is deliberatively consequential." That is, if a participant's choice is based on a deliberation of value preferences, the value preferences can be observed in the motivation the participant provides for the choice. Thus, we propose and compare value preference estimation methods that prioritize the values estimated from motivations over the values estimated from choices alone. Then, we introduce a disambiguation strategy that combines Natural Language Processing and Active Learning to address the detected inconsistencies between choices and motivations. We evaluate the proposed methods on a dataset of a large-scale survey on energy transition. The results show that explicitly addressing inconsistencies between choices and motivations improves the estimation of an individual's value preferences. The disambiguation strategy does not show substantial improvements when compared to similar baselines; however, we discuss how the novelty of the approach can open new research avenues and propose improvements to address the current limitations.
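The disambiguation strategy combines NLP-based conflict detection with Active Learning, which selects the participants whose estimates are most uncertain for interactive clarification. A minimal sketch of uncertainty-based query selection follows; the entropy criterion, data layout, and query budget are illustrative assumptions, not the paper's exact strategy.

```python
import math

def entropy(scores):
    """Shannon entropy of a participant's (unnormalized) value-score
    distribution: higher entropy means a more uncertain estimate."""
    total = sum(scores.values())
    probs = [s / total for s in scores.values() if s > 0]
    return -sum(p * math.log(p) for p in probs)

def select_for_disambiguation(participants, budget):
    """Among participants with a detected choice-motivation conflict,
    query the `budget` most uncertain ones for clarification."""
    conflicting = [p for p in participants if p["conflict"]]
    conflicting.sort(key=lambda p: entropy(p["scores"]), reverse=True)
    return [p["id"] for p in conflicting[:budget]]

# Hypothetical survey participants with estimated value distributions.
participants = [
    {"id": "a", "conflict": True,  "scores": {"cost": 0.5, "env": 0.5}},
    {"id": "b", "conflict": True,  "scores": {"cost": 0.9, "env": 0.1}},
    {"id": "c", "conflict": False, "scores": {"cost": 0.5, "env": 0.5}},
]
print(select_for_disambiguation(participants, budget=1))  # ['a']
```

Participant "a" is queried first: their estimate is maximally uncertain, so their answer is expected to be the most informative, while "c" is skipped because no conflict was detected.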
Problem

Research questions and friction points this paper is trying to address.

Estimate citizen value preferences
Resolve choice-motivation inconsistencies
Enhance participatory policy-making systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI estimates value preferences
NLP and Active Learning disambiguation
Prioritize motivations over choices
Enrico Liscio
Postdoctoral researcher, TU Delft
NLP · Deep Learning · Morality · Human Values · Ethics
L. C. Siebert
Delft University of Technology, the Netherlands
C. Jonker
Delft University of Technology and Leiden University, the Netherlands
P. Murukannaiah
Delft University of Technology, the Netherlands