When the Universe is Too Big: Bounding Consideration Probabilities for Plackett-Luce Rankings

📅 2024-01-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
The Plackett-Luce model overlooks cognitive constraints and fails to capture "consider-then-choose" behavior when option sets are large. Method: We propose a "consider-then-choose" framework centered on the identifiability of unobserved item consideration probabilities. Assuming known utilities but unknown consideration probabilities, we develop a constraint-propagation algorithm that systematically tightens bounds on consideration probabilities. Contribution/Results: We establish the first theoretical identifiability results for consideration probabilities in top-k rankings, characterizing both their relative ordering and absolute upper/lower bounds. Empirical validation on data from a psychology experiment demonstrates joint estimation of utility parameters and consideration probability bounds, confirming both the theoretical guarantees and the practical efficacy of the derived bounds.
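For reference, the standard Plackett-Luce model referred to throughout is the sequential softmax choice model. With item utilities $u_i$ and a consideration set $C$, the probability of a top-$k$ ranking $\sigma$ takes the standard form below (notation is ours, not drawn from the paper):

```latex
P(\sigma \mid C) = \prod_{t=1}^{k}
  \frac{\exp(u_{\sigma(t)})}
       {\sum_{j \in C \setminus \{\sigma(1), \dots, \sigma(t-1)\}} \exp(u_j)}
```

Each factor is an ordinary softmax choice among the considered items not yet ranked.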

📝 Abstract
The widely used Plackett-Luce ranking model assumes that individuals rank items by making repeated choices from a universe of items. But in many cases the universe is too big for people to plausibly consider all options. In the choice literature, this issue has been addressed by supposing that individuals first sample a small consideration set and then choose among the considered items. However, inferring unobserved consideration sets (or item consideration probabilities) in this "consider-then-choose" setting poses significant challenges, because even simple models of consideration with strong independence assumptions are not identifiable, even if item utilities are known. We apply the consider-then-choose framework to top-$k$ rankings, where we assume rankings are constructed according to a Plackett-Luce model after sampling a consideration set. While item consideration probabilities remain non-identified in this setting, we prove that we can infer bounds on the relative values of consideration probabilities. Additionally, given a condition on the expected consideration set size and known item utilities, we derive absolute upper and lower bounds on item consideration probabilities. We also provide algorithms to tighten those bounds on consideration probabilities by propagating inferred constraints. Thus, we show that we can learn useful information about consideration probabilities despite not being able to identify them precisely. We demonstrate our methods on a ranking dataset from a psychology experiment with two different ranking tasks (one with fixed consideration sets and one with unknown consideration sets). This combination of data allows us to estimate utilities and then learn about unknown consideration probabilities using our bounds.
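The generative process the abstract describes, sample a consideration set, then rank the considered items by Plackett-Luce, can be sketched in a few lines. This is a toy illustration of the model family, not the paper's code; the function name and independent-Bernoulli consideration assumption are ours:

```python
import math
import random

def sample_ranking(utilities, consider_probs, k, rng=random.Random(0)):
    """Toy consider-then-choose sketch (illustrative, not the paper's code):
    each item enters the consideration set independently with its
    consideration probability; a top-k ranking is then drawn from the
    considered items by repeated Plackett-Luce (softmax) choices."""
    # Independent Bernoulli consideration set (a common simplifying model)
    considered = [i for i in range(len(utilities))
                  if rng.random() < consider_probs[i]]
    ranking = []
    while considered and len(ranking) < k:
        # Softmax weights over the items still considered and unranked
        weights = [math.exp(utilities[i]) for i in considered]
        total = sum(weights)
        r, acc = rng.random() * total, 0.0
        for i, w in zip(considered, weights):
            acc += w
            if r <= acc:
                ranking.append(i)
                considered.remove(i)
                break
    return ranking
```

Note that if the sampled consideration set has fewer than $k$ items, the ranking comes up short, which is exactly the coupling between consideration probabilities and observed top-$k$ rankings that the paper's bounds exploit.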
Problem

Research questions and friction points this paper is trying to address.

Bounding consideration probabilities in Plackett-Luce rankings
Addressing non-identifiability of consideration sets in choice models
Deriving constraints on item consideration probabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bounds on relative consideration probabilities
Absolute bounds with set size condition
Algorithm to tighten probability bounds
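The bound-tightening idea in the last bullet can be illustrated with generic interval constraint propagation. The sketch below is hypothetical, not the paper's algorithm: it assumes two kinds of constraints as examples, a fixed expected consideration set size (sum of probabilities equals m) and relative lower bounds of the form p_i >= c * p_j, and repeatedly narrows each interval until a fixed point:

```python
def tighten_bounds(lo, hi, m, ratio_lbs, iters=20):
    """Illustrative interval constraint propagation (not the paper's
    algorithm). Assumed constraints:
      * sum_i p_i = m            (expected consideration set size)
      * p_i >= c * p_j           for each (i, j, c) in ratio_lbs
    Intervals [lo[i], hi[i]] only ever shrink, so iteration converges."""
    n = len(lo)
    lo, hi = list(lo), list(hi)
    for _ in range(iters):
        changed = False
        total_lo, total_hi = sum(lo), sum(hi)
        for i in range(n):
            # Sum constraint: p_i = m - sum_{j != i} p_j
            new_hi = min(hi[i], m - (total_lo - lo[i]))
            new_lo = max(lo[i], m - (total_hi - hi[i]))
            if new_hi < hi[i] or new_lo > lo[i]:
                total_lo += new_lo - lo[i]
                total_hi += new_hi - hi[i]
                lo[i], hi[i] = new_lo, new_hi
                changed = True
        for i, j, c in ratio_lbs:
            # p_i >= c * p_j  implies  lo_i >= c * lo_j  and  hi_j <= hi_i / c
            if c * lo[j] > lo[i]:
                lo[i] = c * lo[j]
                changed = True
            if hi[i] / c < hi[j]:
                hi[j] = hi[i] / c
                changed = True
        if not changed:
            break
    return lo, hi
```

The key property is that each pass can only shrink intervals, so propagating one constraint may enable another to bite on the next pass, which is the general mechanism behind tightening bounds by propagating inferred constraints.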
Ben Aoki-Sherwood
Johns Hopkins Applied Physics Lab
Catherine Bregou
Carleton College
David Liben-Nowell
Carleton College
Kiran Tomlinson
Microsoft Research
machine learning · network science · computational biology
Thomas Zeng
University of Wisconsin