Abstract
Rideshare platforms exert significant control over workers through algorithmic systems that can cause financial, emotional, and physical harm. What steps can platforms, designers, and practitioners take to mitigate these negative impacts and meet worker needs? In this paper, we use a novel mixed-methods study, combining an LLM-based analysis of over one million comments posted to online platform worker communities with semi-structured interviews of workers, to identify transparency-related harms, mitigation strategies, and worker needs, and to validate and contextualize our findings within the broader worker community. Our findings expose a transparency gap between existing platform designs and the information drivers need, particularly concerning promotions, fares, routes, and task allocation. Our analysis suggests that rideshare workers need key pieces of information, which we refer to as indicators, to make informed work decisions; these include details about rides, driver statistics, algorithmic implementation details, and platform policy information. We argue that rather than relying on platforms to include such information in their designs, new regulations requiring platforms to publish public transparency reports may be a more effective way to improve worker well-being, and we offer recommendations for implementing such a policy.
| Original language | English (US) |
|---|---|
| Article number | CSCW161 |
| Journal | Proceedings of the ACM on Human-Computer Interaction |
| Volume | 9 |
| Issue number | 2 |
| DOIs | |
| State | Published - May 2, 2025 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Social Sciences (miscellaneous)
- Human-Computer Interaction
- Computer Networks and Communications
Keywords
- AI Transparency
- LLMs
- Labor
- Policy
- Rideshare Platforms