🤖 AI Summary
Digital twins of complex socio-technical systems often yield algorithmically optimal decisions that violate human fairness intuitions, owing to a misalignment between algorithmic rationality and bounded human rationality. To address this, we propose a context-aware preference learning framework that, for the first time, formalizes human pairwise decision preferences as an optimizable fairness objective. Specifically, we employ a Siamese neural network to learn context-dependent convex quadratic cost functions, enabling an inverse mapping from implicit fairness judgments to explicit, differentiable optimization targets. Our method unifies preference learning, inverse reinforcement learning, and convex optimization, ensuring both interpretability and computational efficiency. Evaluated on COVID-19 medical resource allocation, it significantly improves alignment with human fairness assessments (+32.7% Kendall τ) while maintaining millisecond-scale inference speed.
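To make the core idea concrete, the sketch below shows one plausible way a Siamese network could map a decision context to the parameters of a convex quadratic cost and be trained from pairwise preferences. This is a minimal illustration, not the paper's architecture: the layer sizes, the softplus-based convexity constraint, and the Bradley-Terry-style logistic loss are all assumptions made for this example.

```python
# Minimal sketch (assumed design, not the authors' exact model): a Siamese network
# that maps a context c to a convex quadratic cost f_c(x) = x^T diag(q) x + b^T x,
# with q >= 0 enforcing convexity, trained so human-preferred decisions cost less.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextToQuadraticCost(nn.Module):
    def __init__(self, context_dim: int, decision_dim: int, hidden: int = 64):
        super().__init__()
        # Shared encoder producing diag(q) and the linear term b for the current context.
        self.encoder = nn.Sequential(
            nn.Linear(context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * decision_dim),
        )
        self.decision_dim = decision_dim

    def cost(self, context: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        params = self.encoder(context)
        q = F.softplus(params[..., : self.decision_dim])  # q >= 0 keeps the cost convex
        b = params[..., self.decision_dim :]
        return (q * x * x).sum(-1) + (b * x).sum(-1)

    def forward(self, context, x_pref, x_other):
        # Siamese evaluation: the same cost head scores both candidate decisions.
        return self.cost(context, x_pref), self.cost(context, x_other)

def preference_loss(cost_pref: torch.Tensor, cost_other: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry-style logistic loss: the preferred decision should have lower cost.
    return F.softplus(cost_pref - cost_other).mean()

if __name__ == "__main__":
    model = ContextToQuadraticCost(context_dim=10, decision_dim=5)
    ctx = torch.randn(32, 10)
    x_a, x_b = torch.rand(32, 5), torch.rand(32, 5)  # x_a assumed human-preferred
    loss = preference_loss(*model(ctx, x_a, x_b))
    loss.backward()
```

Because the learned cost is quadratic with a nonnegative diagonal, it stays both differentiable for training and convex for the downstream solver.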
📝 Abstract
Digital Twins (DTs) are increasingly used as autonomous decision-makers in complex socio-technical systems. Their mathematically optimal decisions often diverge from human expectations, exposing a persistent gap between algorithmic and bounded human rationality. We address this gap by proposing a framework that operationalizes fairness as a learnable objective within optimization-based Digital Twins. We introduce a preference-driven learning pipeline that infers latent fairness objectives directly from human pairwise preferences over feasible decisions. A novel Siamese neural network is developed to generate convex quadratic cost functions conditioned on contextual information. The resulting surrogate objectives align optimization outcomes with human-perceived fairness while maintaining computational efficiency. The approach is demonstrated on a COVID-19 hospital resource allocation scenario. This study provides an actionable path toward embedding human-centered fairness in the design of autonomous decision-making systems.
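To illustrate how a learned surrogate objective could then drive the optimization-based Digital Twin, the toy example below solves a small convex quadratic allocation problem with cvxpy. The hospital structure, constraints, and numbers are illustrative placeholders under our own assumptions; they are not the paper's COVID-19 case study or its actual constraint set.

```python
# Toy downstream step (assumed setup): given the context-conditioned quadratic cost
# parameters (q, b) produced by the preference model, allocate a scarce resource
# across hospitals by solving a standard convex QP.
import cvxpy as cp
import numpy as np

def fair_allocation(q: np.ndarray, b: np.ndarray, demand: np.ndarray, supply: float) -> np.ndarray:
    """Minimize the learned quadratic fairness cost subject to simple capacity constraints."""
    n = len(demand)
    x = cp.Variable(n, nonneg=True)  # units allocated to each hospital
    objective = cp.Minimize(cp.sum(cp.multiply(q, cp.square(x))) + b @ x)
    constraints = [cp.sum(x) <= supply, x <= demand]
    cp.Problem(objective, constraints).solve()
    return x.value

if __name__ == "__main__":
    q = np.array([1.0, 0.5, 2.0])     # hypothetical learned curvature terms (q >= 0)
    b = np.array([-3.0, -2.0, -4.0])  # hypothetical learned linear terms
    print(fair_allocation(q, b, demand=np.array([50.0, 80.0, 60.0]), supply=120.0))
```

Since the learned cost is convex by construction, the allocation step remains a small QP, which is consistent with the millisecond-scale inference reported in the summary.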