In-Context Inverse Optimality for Fair Digital Twins: A Preference-Based Approach

📅 2025-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Digital twins of complex socio-technical systems often yield algorithmically optimal decisions that violate human fairness intuitions, a symptom of the misalignment between algorithmic rationality and bounded human rationality. To address this, we propose a context-aware preference learning framework that, for the first time, formalizes human pairwise decision preferences as an optimizable fairness objective. Specifically, we employ a Siamese neural network to learn context-dependent convex quadratic cost functions, enabling an inverse mapping from implicit fairness judgments to explicit, differentiable optimization targets. Our method unifies preference learning, inverse reinforcement learning, and convex optimization, ensuring both interpretability and computational efficiency. Evaluated on COVID-19 medical resource allocation, it significantly improves alignment with human fairness assessments (+32.7% Kendall τ) while maintaining millisecond-scale inference speed.
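The preference-learning pipeline the summary describes can be sketched as follows. This is a minimal illustrative forward pass, not the paper's architecture: the linear map standing in for the Siamese network's shared weights, the context dimension, and the Bradley-Terry likelihood over cost gaps are all assumptions. The Cholesky-style parameterization is one standard way to guarantee the learned quadratic cost is strictly convex.

```python
import numpy as np

def quadratic_cost_params(context, W, b, dim=3, eps=1e-3):
    """Map a context vector to a strictly convex quadratic cost (Q, q).

    Q = L L^T + eps*I is positive definite by construction. W and b
    stand in for a Siamese network's shared weights (illustrative).
    """
    h = W @ context + b                        # shared embedding
    L = np.tril(h[: dim * dim].reshape(dim, dim))
    Q = L @ L.T + eps * np.eye(dim)
    q = h[dim * dim : dim * dim + dim]
    return Q, q

def cost(x, Q, q):
    """Convex quadratic cost of a candidate decision x."""
    return 0.5 * x @ Q @ x + q @ x

def preference_log_loss(x_pref, x_rej, context, W, b):
    """Negative Bradley-Terry log-likelihood that the human-preferred
    decision x_pref has lower learned cost than the rejected x_rej."""
    Q, q = quadratic_cost_params(context, W, b)
    margin = cost(x_rej, Q, q) - cost(x_pref, Q, q)
    p_pref = 1.0 / (1.0 + np.exp(-margin))     # sigmoid of cost gap
    return -np.log(p_pref)

rng = np.random.default_rng(0)
dim, ctx_dim = 3, 4
W = rng.normal(size=(dim * dim + dim, ctx_dim)) * 0.1
b = rng.normal(size=dim * dim + dim) * 0.1
context = rng.normal(size=ctx_dim)
loss = preference_log_loss(rng.normal(size=dim), rng.normal(size=dim),
                           context, W, b)
print(float(loss))
```

Training would minimize this loss over a dataset of human pairwise choices; because Q is positive definite for every context, the downstream allocation problem stays a convex QP.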

📝 Abstract
Digital Twins (DTs) are increasingly used as autonomous decision-makers in complex socio-technical systems. Their mathematically optimal decisions often diverge from human expectations, exposing a persistent gap between algorithmic and bounded human rationality. This work addresses this gap by proposing a framework that operationalizes fairness as a learnable objective within optimization-based Digital Twins. We introduce a preference-driven learning pipeline that infers latent fairness objectives directly from human pairwise preferences over feasible decisions. A novel Siamese neural network is developed to generate convex quadratic cost functions conditioned on contextual information. The resulting surrogate objectives align optimization outcomes with human-perceived fairness while maintaining computational efficiency. The approach is demonstrated on a COVID-19 hospital resource allocation scenario. This study provides an actionable path toward embedding human-centered fairness in the design of autonomous decision-making systems.
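Once a convex quadratic surrogate objective has been learned, the decision step reduces to a small QP. A minimal sketch, assuming a single total-resource equality constraint so the KKT system can be solved in closed form; the hospital count and cost parameters below are made up for illustration, not taken from the paper's COVID-19 scenario.

```python
import numpy as np

def allocate(Q, q, total):
    """Minimize 0.5 x^T Q x + q^T x  subject to  sum(x) = total.

    Solves the KKT system [[Q, 1], [1^T, 0]] [x; lam] = [-q; total]
    directly; valid because Q is positive definite.
    """
    n = Q.shape[0]
    ones = np.ones((n, 1))
    kkt = np.block([[Q, ones], [ones.T, np.zeros((1, 1))]])
    rhs = np.concatenate([-q, [total]])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]

# Illustrative learned cost for 3 hospitals (not the paper's data).
Q = np.diag([2.0, 1.0, 4.0])
q = np.array([0.0, -1.0, 0.5])
x = allocate(Q, q, total=100.0)
print(x, x.sum())
```

With inequality constraints (e.g. non-negative allocations) the same objective would be handed to a generic QP solver; convexity of the learned cost is what keeps inference at millisecond scale.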
Problem

Research questions and friction points this paper is trying to address.

Bridging algorithmic and human rationality gaps in Digital Twins
Learning fairness objectives from human preferences for optimization
Embedding human-centered fairness in autonomous decision-making systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learns fairness objectives from human pairwise preferences
Uses Siamese neural network for convex quadratic cost functions
Aligns optimization outcomes with human-perceived fairness efficiently
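The +32.7% Kendall τ reported in the summary measures rank agreement between the learned cost ordering and human fairness judgments. A minimal pairwise implementation of the metric (assuming no ties, which the full τ-b statistic would handle):

```python
def kendall_tau(a, b):
    """Kendall rank correlation: (concordant - discordant) pairs,
    normalized by the total number of pairs. Assumes no ties."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Perfect agreement -> 1.0; full reversal -> -1.0.
print(kendall_tau([1, 2, 3, 4], [1, 2, 3, 4]))   # 1.0
print(kendall_tau([1, 2, 3, 4], [4, 3, 2, 1]))   # -1.0
```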