Offline Learning of Nash Stable Coalition Structures with Possibly Overlapping Coalitions

📅 2026-02-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the challenge of efficiently learning Nash-stable coalition structures from offline data in realistic settings where preference information is incomplete and coalitions may overlap. Within a novel offline learning framework, the work infers preferences of self-interested agents using only agent-level or coalition-level utility feedback from fixed interaction logs, and constructs approximately Nash-stable overlapping coalition partitions. The authors establish a preference identifiability condition based on informational sufficiency, design a sample-efficient coalition formation algorithm, and provide theoretical guarantees on near-optimal sample complexity. Experimental results demonstrate that the proposed method rapidly converges to stable solutions with low approximation error across diverse settings.

📝 Abstract
Coalition formation concerns strategic collaborations of selfish agents that form coalitions based on their preferences. It is often assumed that coalitions are disjoint and preferences are fully known, which may not hold in practice. In this paper, we thus present a new model of coalition formation with possibly overlapping coalitions under partial information, where selfish agents may be part of multiple coalitions simultaneously and their full preferences are initially unknown. Instead, information about past interactions and associated utility feedback is stored in a fixed offline dataset, and we aim to efficiently infer the agents' preferences from this dataset. We analyze the impact of diverse dataset information constraints by studying two types of utility feedback that can be stored in the dataset: agent- and coalition-level utility feedback. For both feedback models, we identify assumptions under which the dataset covers sufficient information for an offline learning algorithm to infer preferences and use them to recover a partition that is (approximately) Nash stable, meaning that no agent can improve her utility by unilaterally deviating. An additional goal is to devise algorithms with low sample complexity, requiring only a small dataset to obtain a desired approximation to Nash stability. Under agent-level feedback, we provide a sample-efficient algorithm proven to obtain an approximately Nash stable partition under a necessary and sufficient assumption on the information covered by the dataset. Under coalition-level feedback, however, we show that sample-efficient learning is possible only under a stricter assumption. Still, in multiple cases, our algorithms' sample complexity bounds are optimal up to logarithmic factors. Finally, extensive experiments show that our algorithm converges to a low approximation to Nash stability across diverse settings.
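The stability notion from the abstract can be sketched as a simple check: a structure is ε-Nash stable if no agent can gain more than ε by unilaterally joining or leaving a single coalition. The sketch below is illustrative only; the `utility` function, the representation of overlapping coalitions as a dict of member sets, and the set of allowed deviations are all hypothetical simplifications, not the paper's actual model.

```python
def is_eps_nash_stable(agents, coalitions, utility, eps):
    """Check whether an overlapping coalition structure is eps-Nash stable.

    `coalitions` maps a coalition id to its member set; an agent may appear
    in several coalitions at once. `utility(agent, memberships)` is a
    hypothetical agent-level utility over the set of coalition ids the agent
    belongs to. The structure is eps-Nash stable if no agent gains more than
    eps by unilaterally leaving one of her coalitions or joining one she is
    not yet in.
    """
    for a in agents:
        current = {cid for cid, members in coalitions.items() if a in members}
        base = utility(a, frozenset(current))
        # Unilateral deviations: drop one current coalition, or add one new one.
        deviations = [current - {cid} for cid in current]
        deviations += [current | {cid} for cid in coalitions if cid not in current]
        for alt in deviations:
            if utility(a, frozenset(alt)) > base + eps:
                return False  # agent a has a profitable unilateral deviation
    return True
```

For example, with a toy utility that simply counts memberships, any agent left out of some coalition has a profitable deviation (joining it), so the check fails unless every agent belongs to every coalition.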
Problem

Research questions and friction points this paper is trying to address.

coalition formation
overlapping coalitions
offline learning
Nash stability
partial information
Innovation

Methods, ideas, or system contributions that make the work stand out.

offline learning
overlapping coalitions
Nash stability
sample complexity
coalition formation