TY - GEN
T1 - The Role of Randomness and Noise in Strategic Classification
AU - Braverman, Mark
AU - Garg, Sumegha
N1 - Publisher Copyright:
© Mark Braverman and Sumegha Garg; licensed under Creative Commons License CC-BY
PY - 2020/5/1
Y1 - 2020/5/1
N2 - We investigate the problem of designing optimal classifiers in the “strategic classification” setting, where the classification is part of a game in which players can modify their features to attain a favorable classification outcome (while incurring some cost). Previously, the problem has been considered from a learning-theoretic perspective and from an algorithmic-fairness perspective. Our main contributions include: (1) showing that if the objective is to maximize the efficiency of the classification process (defined as the accuracy of the outcome minus the sunk cost incurred by qualified players who manipulate their features to gain a better outcome), then randomized classifiers (ones in which the probability that a given feature vector is accepted is strictly between 0 and 1) are necessary; (2) showing that in many natural cases, the imposed optimal solution (in terms of efficiency) has a structure in which players never change their feature vectors, because the randomized classifier is arranged so that the gain in the probability of being classified as a “1” does not justify the expense of changing one’s features; (3) observing that randomized classification is not a stable best response from the classifier’s viewpoint, and that the classifier cannot benefit from randomization without creating instability in the system; and (4) showing that in some cases a noisier signal leads to better equilibrium outcomes, improving both accuracy and fairness when multiple subpopulations with different feature-adjustment costs are involved. The last point is particularly interesting from a policy perspective: it is hard to force institutions to stick to a particular randomized classification strategy (especially in the context of a market with multiple classifiers), but it is possible to alter the information environment to make the feature signals inherently noisier.
AB - We investigate the problem of designing optimal classifiers in the “strategic classification” setting, where the classification is part of a game in which players can modify their features to attain a favorable classification outcome (while incurring some cost). Previously, the problem has been considered from a learning-theoretic perspective and from an algorithmic-fairness perspective. Our main contributions include: (1) showing that if the objective is to maximize the efficiency of the classification process (defined as the accuracy of the outcome minus the sunk cost incurred by qualified players who manipulate their features to gain a better outcome), then randomized classifiers (ones in which the probability that a given feature vector is accepted is strictly between 0 and 1) are necessary; (2) showing that in many natural cases, the imposed optimal solution (in terms of efficiency) has a structure in which players never change their feature vectors, because the randomized classifier is arranged so that the gain in the probability of being classified as a “1” does not justify the expense of changing one’s features; (3) observing that randomized classification is not a stable best response from the classifier’s viewpoint, and that the classifier cannot benefit from randomization without creating instability in the system; and (4) showing that in some cases a noisier signal leads to better equilibrium outcomes, improving both accuracy and fairness when multiple subpopulations with different feature-adjustment costs are involved. The last point is particularly interesting from a policy perspective: it is hard to force institutions to stick to a particular randomized classification strategy (especially in the context of a market with multiple classifiers), but it is possible to alter the information environment to make the feature signals inherently noisier.
KW - Fairness
KW - Noisy features
KW - Randomized classification
KW - Strategic classification
UR - http://www.scopus.com/inward/record.url?scp=85092767053&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85092767053&partnerID=8YFLogxK
U2 - 10.4230/LIPIcs.FORC.2020.9
DO - 10.4230/LIPIcs.FORC.2020.9
M3 - Conference contribution
AN - SCOPUS:85092767053
T3 - Leibniz International Proceedings in Informatics, LIPIcs
BT - 1st Symposium on Foundations of Responsible Computing, FORC 2020
A2 - Roth, Aaron
PB - Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
T2 - 1st Symposium on Foundations of Responsible Computing, FORC 2020
Y2 - 1 June 2020
ER -