🤖 AI Summary
This paper addresses the lack of explainable decision support for ethical consumer coffee purchasing by proposing a gamified, explainable AI system. Methodologically, it introduces a Kantian–utilitarian dual-engine reasoning framework: the Kantian engine detects violations of deontological principles (e.g., child labor, deforestation), while the utilitarian engine computes multi-criteria weighted scores over normalized attributes (e.g., price, carbon footprint, water usage). A meta-interpreter, bounded by a regret threshold of 0.2, dynamically identifies ethical conflicts and recommends options that are near-optimal in utility while remaining morally compliant. The system integrates symbolic reasoning, normalized attribute modeling, certification-to-attribute mapping, and auditable policy trajectories to enable interactive, transparent explanations. Empirical evaluation demonstrates significant improvements in both ethical decision quality and explanation credibility. This work establishes a novel paradigm for sustainable-consumption AI that jointly satisfies normative rigor and practical applicability.
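To make the dual-engine mechanism concrete, the Python sketch below shows one way the Kantian rule check, utilitarian weighted scoring, and regret-bounded meta-interpreter could fit together. The attribute names, rule predicates, and weights are illustrative assumptions for this summary, not the paper's released configuration; only the 0.2 regret threshold comes from the text.

```python
# Minimal sketch of the dual-engine recommendation loop.
# Attribute names, weights, and rule predicates are hypothetical placeholders.

WEIGHTS = {
    "price": 0.20, "carbon": 0.20, "water": 0.15,
    "transparency": 0.15, "farmer_income_share": 0.15, "taste": 0.15,
}

# Deontological rules: each predicate returns True when the option violates it.
KANTIAN_RULES = {
    "child_labor": lambda o: o.get("child_labor_risk", 0) > 0,
    "deforestation": lambda o: o.get("deforestation_risk", 0) > 0
                               and not o.get("shade_certified", False),
    "opaque_supply_chain": lambda o: o.get("transparency", 0.0) < 0.2,
}

REGRET_THRESHOLD = 0.2  # maximum acceptable utility loss when switching options


def kantian_violations(option):
    """Names of the deontological rules the option violates."""
    return [name for name, rule in KANTIAN_RULES.items() if rule(option)]


def utilitarian_score(option):
    """Weighted sum over normalized attributes (0-1, higher is better)."""
    return sum(w * option.get(attr, 0.0) for attr, w in WEIGHTS.items())


def recommend(options):
    """Meta-interpreter: keep the utility-optimal option unless it violates a
    Kantian rule and a deontically clean option exists within the regret bound."""
    scored = [(utilitarian_score(o), o) for o in options]
    best_score, best = max(scored, key=lambda t: t[0])
    clean = [(s, o) for s, o in scored if not kantian_violations(o)]
    if kantian_violations(best) and clean:
        clean_score, clean_best = max(clean, key=lambda t: t[0])
        if best_score - clean_score <= REGRET_THRESHOLD:
            return clean_best, f"switched (regret {best_score - clean_score:.2f} <= {REGRET_THRESHOLD})"
    return best, "utility-optimal option retained"
```

In this sketch, `recommend` returns both the chosen option and a one-line reason string, which is the kind of artifact an interactive explanation layer could surface to the user.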
📝 Abstract
We present a gamified explainable AI (XAI) system for ethically aware consumer decision-making in the coffee domain. Each session comprises six rounds with three options per round. Two symbolic engines provide real-time rationales: a Kantian module flags rule violations (e.g., child labor, deforestation risk without shade certification, opaque supply chains, unsafe decaf), and a utilitarian module scores options via multi-criteria aggregation over normalized attributes (price, carbon, water, transparency, farmer income share, taste/freshness, packaging, convenience). A meta-explainer with a regret bound of 0.2 highlights Kantian–utilitarian (mis)alignment and switches to a deontically clean, near-parity option when the welfare loss is small. We release a structured configuration (attribute schema, certification map, weights, rule set), a policy trace for auditability, and an interactive UI.
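As a rough illustration of the released artifacts, the sketch below shows what the structured configuration and one auditable policy-trace entry might look like. All field names, certification adjustments, and numeric values other than the 0.2 regret bound and the six-round, three-option session structure are invented here for illustration and need not match the paper's actual schema.

```python
import time

# Hypothetical configuration mirroring the released artifacts:
# attribute schema, certification-to-attribute map, weights, rule set.
CONFIG = {
    "attributes": [
        "price", "carbon", "water", "transparency",
        "farmer_income_share", "taste_freshness", "packaging", "convenience",
    ],
    "certification_map": {  # how a certification adjusts normalized attributes
        "fair_trade": {"farmer_income_share": +0.20, "transparency": +0.10},
        "shade_grown": {"deforestation_risk": -1.0},
        "organic": {"water": +0.05},
    },
    "weights": {
        "price": 0.20, "carbon": 0.20, "water": 0.15, "transparency": 0.15,
        "farmer_income_share": 0.15, "taste_freshness": 0.15,
    },
    "rules": [
        "child_labor", "deforestation_without_shade_certification",
        "opaque_supply_chain", "unsafe_decaf",
    ],
    "regret_bound": 0.2,
    "rounds": 6,
    "options_per_round": 3,
}


def log_policy_step(trace, round_id, option_ids, violations, scores, chosen, reason):
    """Append one auditable entry to the policy trace."""
    trace.append({
        "timestamp": time.time(),
        "round": round_id,
        "options": option_ids,
        "kantian_violations": violations,
        "utilitarian_scores": scores,
        "chosen": chosen,
        "reason": reason,
    })


# Example: recording one of the six rounds.
trace = []
log_policy_step(
    trace,
    round_id=1,
    option_ids=["A", "B", "C"],
    violations={"B": ["opaque_supply_chain"]},
    scores={"A": 0.71, "B": 0.78, "C": 0.64},
    chosen="A",
    reason="regret 0.07 <= 0.2; switched to deontically clean option",
)
```

Persisting such a trace (e.g., as JSON) is one plausible way to support the auditability the abstract describes.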