TY - JOUR
T1 - Causally estimating the effect of YouTube's recommender system using counterfactual bots
AU - Hosseinmardi, Homa
AU - Ghasemian, Amir
AU - Rivera-Lanas, Miguel
AU - Ribeiro, Manoel Horta
AU - West, Robert
AU - Watts, Duncan J.
N1 - Publisher Copyright:
Copyright © 2024 the Author(s). Published by PNAS. This article is distributed under a Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND).
PY - 2024/2/20
Y1 - 2024/2/20
N2 - In recent years, critics of online platforms have raised concerns about the ability of recommendation algorithms to amplify problematic content, with potentially radicalizing consequences. However, attempts to evaluate the effect of recommenders have suffered from a lack of appropriate counterfactuals (what a user would have viewed in the absence of algorithmic recommendations) and hence cannot disentangle the effects of the algorithm from a user's intentions. Here we propose a method that we call “counterfactual bots” to causally estimate the role of algorithmic recommendations in the consumption of highly partisan content on YouTube. By comparing bots that replicate real users' consumption patterns with “counterfactual” bots that follow rule-based trajectories, we show that, on average, relying exclusively on the YouTube recommender results in less partisan consumption, with the effect most pronounced for heavy partisan consumers. Following a similar method, we also show that if partisan consumers switch to moderate content, YouTube's sidebar recommender “forgets” their partisan preference within roughly 30 videos regardless of their prior history, while homepage recommendations shift more gradually toward moderate content. Overall, our findings indicate that, at least since the algorithm changes that YouTube implemented in 2019, individual consumption patterns mostly reflect individual preferences, with algorithmic recommendations playing, if anything, a moderating role.
AB - In recent years, critics of online platforms have raised concerns about the ability of recommendation algorithms to amplify problematic content, with potentially radicalizing consequences. However, attempts to evaluate the effect of recommenders have suffered from a lack of appropriate counterfactuals (what a user would have viewed in the absence of algorithmic recommendations) and hence cannot disentangle the effects of the algorithm from a user's intentions. Here we propose a method that we call “counterfactual bots” to causally estimate the role of algorithmic recommendations in the consumption of highly partisan content on YouTube. By comparing bots that replicate real users' consumption patterns with “counterfactual” bots that follow rule-based trajectories, we show that, on average, relying exclusively on the YouTube recommender results in less partisan consumption, with the effect most pronounced for heavy partisan consumers. Following a similar method, we also show that if partisan consumers switch to moderate content, YouTube's sidebar recommender “forgets” their partisan preference within roughly 30 videos regardless of their prior history, while homepage recommendations shift more gradually toward moderate content. Overall, our findings indicate that, at least since the algorithm changes that YouTube implemented in 2019, individual consumption patterns mostly reflect individual preferences, with algorithmic recommendations playing, if anything, a moderating role.
KW - algorithmic audits
KW - experiment design
KW - online extremism
KW - recommender systems
UR - http://www.scopus.com/inward/record.url?scp=85185239913&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85185239913&partnerID=8YFLogxK
U2 - 10.1073/pnas.2313377121
DO - 10.1073/pnas.2313377121
M3 - Article
C2 - 38349876
AN - SCOPUS:85185239913
SN - 0027-8424
VL - 121
JO - Proceedings of the National Academy of Sciences of the United States of America
JF - Proceedings of the National Academy of Sciences of the United States of America
IS - 8
M1 - e2313377121
ER -