🤖 AI Summary
Algorithmic opacity on ride-hailing platforms harms drivers financially, emotionally, and physically. Combining an LLM-based analysis of over one million comments from online platform worker communities with semi-structured driver interviews, this study identifies a transparency gap between existing platform designs and the information drivers need, concentrated in four areas: promotions, fares, routes, and task allocation. It distills the key pieces of information, termed "indicators," that drivers need to make informed work decisions: details about rides, driver statistics, algorithmic implementation details, and platform policy information. Rather than relying on platforms to disclose this information voluntarily, the study argues that regulations requiring public transparency reports would more effectively improve worker well-being, and it offers recommendations for implementing such a policy. Key contributions include: (1) an empirically grounded set of transparency indicators for rideshare work; (2) a demonstration of mixed methods, pairing large-scale computational analysis with qualitative inquiry, in platform labor research; and (3) advances in both theoretical understanding and practical governance mechanisms for digital labor.
📝 Abstract
Rideshare platforms exert significant control over workers through algorithmic systems that can result in financial, emotional, and physical harm. What steps can platforms, designers, and practitioners take to mitigate these negative impacts and meet worker needs? In this paper, we identify transparency-related harms, mitigation strategies, and worker needs while validating and contextualizing our findings within the broader worker community. We use a novel mixed-methods study combining an LLM-based analysis of over 1 million comments posted to online platform worker communities with semi-structured worker interviews. Our findings expose a transparency gap between existing platform designs and the information drivers need, particularly concerning promotions, fares, routes, and task allocation. Our analysis suggests that rideshare workers need key pieces of information, which we refer to as indicators, to make informed work decisions. These indicators include details about rides, driver statistics, algorithmic implementation details, and platform policy information. We argue that instead of relying on platforms to include such information in their designs, new regulations requiring platforms to publish public transparency reports may be a more effective solution to improve worker well-being. We offer recommendations for implementing such a policy.