TY - JOUR
T1 - How to Be Helpful to Multiple People at Once
AU - Gates, Vael
AU - Griffiths, Thomas L.
AU - Dragan, Anca D.
N1 - Funding Information:
Special thanks to Professor Anant Sahai for suggesting the conditions in Experiment 1, and to the members of the Center for Human‐Compatible Artificial Intelligence for their helpful comments. This work was funded in part by NSF grant 1456709 to T.L.G. and an NIH UC Berkeley Neuroscience Training Program grant to V.G.
Publisher Copyright:
© 2020 Cognitive Science Society, Inc.
PY - 2020/6/1
Y1 - 2020/6/1
N2 - When someone hosts a party, when governments choose an aid program, or when assistive robots decide what meal to serve to a family, decision-makers must determine how to help even when their recipients have very different preferences. Which combination of people’s desires should a decision-maker serve? To provide a potential answer, we turned to psychology: What do people think is best when multiple people have different utilities over options? We developed a quantitative model of what people consider desirable behavior, characterizing participants’ preferences by inferring which combination of “metrics” (maximax, maxsum, maximin, or inequality aversion [IA]) best explained participants’ decisions in a drink-choosing task. We found that participants’ behavior was best described by the maximin metric, describing the desire to maximize the happiness of the worst-off person, though participant behavior was also consistent with maximizing group utility (the maxsum metric) and, to a lesser extent, with the IA metric. Participant behavior was consistent across variation in the agents involved and tended to become more maxsum-oriented when participants were told they were players in the task (Experiment 1). In later experiments, participants maintained maximin behavior across multi-step tasks rather than shortsightedly focusing on the individual steps therein (Experiments 2 and 3). By repeatedly asking participants what choices they would hope for in an optimal, just decision-maker, and carefully disambiguating which quantitative metrics describe these nuanced choices, we help constrain the space of what behavior we desire in leaders, artificial intelligence systems helping decision-makers, and the assistive robots and decision-makers of the future.
KW - Assistive artificial intelligence
KW - Fairness
KW - Maximin
KW - Modeling
KW - Preferences
UR - http://www.scopus.com/inward/record.url?scp=85085157823&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85085157823&partnerID=8YFLogxK
U2 - 10.1111/cogs.12841
DO - 10.1111/cogs.12841
M3 - Article
C2 - 32441390
AN - SCOPUS:85085157823
SN - 0364-0213
VL - 44
JO - Cognitive Science
JF - Cognitive Science
IS - 6
M1 - e12841
ER -