Explainable Distributed Constraint Optimization Problems

📅 2025-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Distributed Constraint Optimization Problems (DCOPs) suffer from poor solution interpretability, which hinders real-world deployment. Method: This paper introduces Explainable DCOP (X-DCOP), the first DCOP framework to natively integrate interpretability. It jointly models optimal solutions and contrastive explanations, formally defining the conditions under which an explanation is valid. A distributed solving framework is developed, along with multiple optimization variants that trade off explanation conciseness against computational overhead. Contribution/Results: Theoretically, X-DCOP establishes foundational guarantees for explanation validity. Empirically, it scales to large DCOP instances. Human-subject experiments confirm that users prefer concise explanations and demonstrate that the variants enable flexible, on-demand balancing of explanation quality and computational efficiency.

📝 Abstract
The Distributed Constraint Optimization Problem (DCOP) formulation is a powerful tool to model cooperative multi-agent problems that need to be solved distributively. A core assumption of existing approaches is that DCOP solutions can be easily understood, accepted, and adopted, which may not hold, as evidenced by the large body of literature on Explainable AI. In this paper, we propose the Explainable DCOP (X-DCOP) model, which extends a DCOP to include its solution and a contrastive query for that solution. We formally define some key properties that contrastive explanations must satisfy for them to be considered as valid solutions to X-DCOPs as well as theoretical results on the existence of such valid explanations. To solve X-DCOPs, we propose a distributed framework as well as several optimizations and suboptimal variants to find valid explanations. We also include a human user study that showed that users, not surprisingly, prefer shorter explanations over longer ones. Our empirical evaluations showed that our approach can scale to large problems, and the different variants provide different options for trading off explanation lengths for smaller runtimes. Thus, our model and algorithmic contributions extend the state of the art by reducing the barrier for users to understand DCOP solutions, facilitating their adoption in more real-world applications.
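To make the abstract's core idea concrete, here is a minimal, illustrative sketch of a DCOP and a contrastive query over its solution. This is a toy centralized view, not the paper's X-DCOP formalism or algorithm; all names (`cost_of`, `contrastive_cost_gap`) and the example costs are hypothetical.

```python
# Toy DCOP: two variables with binary domains, one binary constraint
# given as a cost table over joint value assignments.
domains = {"x1": [0, 1], "x2": [0, 1]}
constraints = {
    ("x1", "x2"): {(0, 0): 3, (0, 1): 1, (1, 0): 0, (1, 1): 2},
}

def cost_of(assignment):
    """Total cost of a complete assignment under all constraints."""
    return sum(table[(assignment[u], assignment[v])]
               for (u, v), table in constraints.items())

def contrastive_cost_gap(solution, foil):
    """Cost increase of the user's proposed alternative ('foil') over
    the given solution; a positive gap is the basic fact a contrastive
    explanation of the form 'why this solution and not that one?' cites."""
    return cost_of(foil) - cost_of(solution)

solution = {"x1": 1, "x2": 0}   # optimal here: cost 0
foil = {"x1": 0, "x2": 0}       # user's contrastive query: why not this?
gap = contrastive_cost_gap(solution, foil)
print(gap)  # 3: the foil costs 3 more than the solution
```

In the paper's setting, this comparison is computed distributedly by the agents, and the explanation additionally has to satisfy formally defined validity properties rather than just report a cost difference.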
Problem

Research questions and friction points this paper is trying to address.

Extends DCOP for explainable solutions
Defines properties for valid contrastive explanations
Proposes scalable distributed framework for X-DCOPs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable DCOP model
Contrastive query solutions
Distributed framework optimizations
Ben Rachmut
Department of Computer Science and Engineering, Washington University in St. Louis
Stylianos Loukas Vasileiou
New Mexico State University
Artificial Intelligence · Human-Aware AI · Explainable Decision-Making · KRR · Automated Planning
Nimrod Meir Weinstein
Department of Industrial Engineering and Management, Ben-Gurion University of the Negev
Roie Zivan
Department of Industrial Engineering and Management, Ben-Gurion University of the Negev
William Yeoh
Department of Computer Science and Engineering, Washington University in St. Louis