🤖 AI Summary
Online marketplace rating systems, while effective at identifying high-quality products, induce excessive outcome variance among producers of similar quality, which particularly hinders new entrants in their early stages. This paper formalizes, for the first time, the fundamental trade-off between system efficiency (accurate identification of high-quality items) and producer-level fairness (minimizing outcome disparities among ex ante homogeneous producers). We propose a prior-weighted scoring framework based on Bayesian updating, in which a tunable prior strength modulates the weight assigned to early ratings, turning the fairness-efficiency trade-off into an explicit, controllable design parameter. We provide theoretical guarantees showing that prior strength precisely governs this trade-off. Calibrated simulations on data from 19 real-world platforms demonstrate that our mechanism significantly improves fairness for new producers while preserving robust learning efficiency.
📝 Abstract
Online marketplaces use rating systems to promote the discovery of high-quality products. However, these systems also lead to high variance in producers' economic outcomes: a new producer who sells high-quality items may unluckily receive a low rating early, severely impacting their future popularity. We investigate the design of rating systems that balance the goals of identifying high-quality products ("efficiency") and minimizing the variance in outcomes of producers of similar quality (individual "producer fairness"). We show that there is a trade-off between these two goals: rating systems that promote efficiency are necessarily less individually fair to producers. We introduce prior-weighted rating systems as an approach to managing this trade-off. Informally, the system we propose sets a system-wide prior for the quality of an incoming product; subsequently, the system updates that prior to a posterior for each product's quality based on user-generated ratings over time. We show theoretically that in markets where products accrue reviews at an equal rate, the strength of the rating system's prior determines the operating point on the identified trade-off: the stronger the prior, the more the marketplace discounts early ratings data (increasing individual fairness), but the slower the platform is in learning about true item quality (so efficiency suffers). We further analyze this trade-off in a responsive market where customers make decisions based on historical ratings. Through calibrated simulations on 19 real-world datasets sourced from large online platforms, we show that the choice of prior strength mediates the same efficiency-consistency trade-off in this setting. Overall, we demonstrate that by tuning the prior as a design choice in a prior-weighted rating system, platforms can be intentional about the balance between efficiency and producer fairness.
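The prior-weighted mechanism the abstract describes can be sketched with a standard Beta-Bernoulli posterior update. This is an illustrative reconstruction, not the paper's actual implementation: the function name, the assumption of binary ratings, and the default prior values are all hypothetical, but the role of the prior-strength parameter matches the trade-off described above (a stronger prior discounts early ratings, so one unlucky review moves a new product's score less, at the cost of slower learning).

```python
def prior_weighted_score(ratings, prior_mean=0.7, prior_strength=10.0):
    """Posterior-mean quality estimate for one product (illustrative sketch).

    ratings:        list of binary ratings (1 = positive, 0 = negative).
    prior_mean:     system-wide prior belief about an incoming product's quality.
    prior_strength: pseudo-count of the prior; larger values discount early
                    ratings more (fairer to new producers, slower learning).
    """
    positives = sum(ratings)
    n = len(ratings)
    # Beta(a, b) prior with mean a / (a + b) and total pseudo-count a + b
    # equal to prior_strength.
    a = prior_mean * prior_strength
    b = (1.0 - prior_mean) * prior_strength
    # Posterior mean after observing n ratings, `positives` of them positive.
    return (a + positives) / (prior_strength + n)

# A new product whose single early rating is unluckily negative:
weak_prior = prior_weighted_score([0], prior_strength=1.0)    # score swings low
strong_prior = prior_weighted_score([0], prior_strength=20.0) # stays near prior
```

Under a weak prior the one bad review drags the score far below the prior mean, while under a strong prior the score barely moves, which is exactly the individual-fairness benefit (and efficiency cost) the theory attributes to prior strength.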