TY - GEN
T1 - Strategyproofing Peer Assessment via Partitioning
T2 - 10th AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2022
AU - Dhull, Komal
AU - Jecmen, Steven
AU - Kothari, Pravesh
AU - Shah, Nihar B.
N1 - Publisher Copyright:
© 2022, Association for the Advancement of Artificial Intelligence.
PY - 2022
Y1 - 2022
N2 - Strategic behavior is a fundamental problem in a variety of real-world applications that require some form of peer assessment, such as peer grading of homework, grant proposal review, conference peer review of scientific papers, and peer assessment of employees in organizations. Since an individual's own work is in competition with the submissions they are evaluating, they may provide dishonest evaluations to increase the relative standing of their own submission. This issue is typically addressed by partitioning the individuals and assigning them to evaluate the work of only those from different subsets. Although this method ensures strategyproofness, each submission may require a different type of expertise for effective evaluation. In this paper, we focus on finding an assignment of evaluators to submissions that maximizes assigned evaluators' expertise subject to the constraint of strategyproofness. We analyze the price of strategyproofness: that is, the amount of compromise on the assigned evaluators' expertise required in order to ensure strategyproofness. We establish several polynomial-time algorithms for strategyproof assignment along with assignment-quality guarantees. Finally, we evaluate the methods on a dataset from conference peer review.
AB - Strategic behavior is a fundamental problem in a variety of real-world applications that require some form of peer assessment, such as peer grading of homework, grant proposal review, conference peer review of scientific papers, and peer assessment of employees in organizations. Since an individual's own work is in competition with the submissions they are evaluating, they may provide dishonest evaluations to increase the relative standing of their own submission. This issue is typically addressed by partitioning the individuals and assigning them to evaluate the work of only those from different subsets. Although this method ensures strategyproofness, each submission may require a different type of expertise for effective evaluation. In this paper, we focus on finding an assignment of evaluators to submissions that maximizes assigned evaluators' expertise subject to the constraint of strategyproofness. We analyze the price of strategyproofness: that is, the amount of compromise on the assigned evaluators' expertise required in order to ensure strategyproofness. We establish several polynomial-time algorithms for strategyproof assignment along with assignment-quality guarantees. Finally, we evaluate the methods on a dataset from conference peer review.
UR - http://www.scopus.com/inward/record.url?scp=85175729608&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85175729608&partnerID=8YFLogxK
U2 - 10.1609/hcomp.v10i1.21987
DO - 10.1609/hcomp.v10i1.21987
M3 - Conference contribution
AN - SCOPUS:85175729608
SN - 9781577358787
T3 - Proceedings of the AAAI Conference on Human Computation and Crowdsourcing
SP - 53
EP - 63
BT - HCOMP 2022 - Proceedings of the 10th AAAI Conference on Human Computation and Crowdsourcing
A2 - Hsu, Jane
A2 - Yin, Ming
PB - Association for the Advancement of Artificial Intelligence
Y2 - 6 November 2022 through 10 November 2022
ER -