Title:
Predictive algorithms and fairness
Author(s):
EDUARDO RIVERA LÓPEZ
Meeting:
Congress; International Congress on Logic, Methodology and Philosophy of Science and Technology; 2023
Abstract:
The ethical challenges of artificial intelligence are already pressing today and will become even more pressing in the near future. One aspect of these challenges concerns the increasing capacity of computational algorithms to predict human behavior. This kind of prediction is, of course, not new. In order to make decisions that affect other people, we make predictions about those people all the time. To mention a few examples: a bank must decide whether to grant a loan to a customer, a credit card company whether to issue a card to an applicant, a life insurance company what the premium for a certain customer will be, a judge whether or not to grant parole to a convicted person (or pretrial detention to a defendant), an employer whether to offer a job to a candidate. At least in part, the bank considers the probability that the customer will repay the loan; the judge considers the probability that the convicted person will commit a crime while on parole; and the employer considers the probability that the candidate will adequately perform the assigned tasks.

All these probability judgments are predictive judgments. With the development of computing and artificial intelligence, which are capable of processing enormous amounts of data, these predictive judgments are increasingly delegated to algorithms. By processing enormous amounts of information, these computational mechanisms supposedly achieve better, or at least more accurate, predictions.

A question that has been raised is whether these algorithms can be unfair to some individuals in virtue of their membership in a certain group. The most famous and most discussed example of this kind of possible unfairness is COMPAS, an algorithm designed to predict recidivism in convicted persons. COMPAS is used in several courts in the United States to inform decisions about granting parole. In May 2016, the nonprofit news organization ProPublica published a study accusing COMPAS of producing biased results detrimental to African-Americans (Angwin et al. 2016). ProPublica found that the rates of false positives and false negatives differed between the two groups. However, the company behind the algorithm (Northpointe) responded that the disparity pointed out by ProPublica is due to the fact that the base rates of the two groups differ (i.e., the probability of recidivism of the members of each group). On this view, what is relevant is not the differential rate of false positives or false negatives, but whether the predictive value of the algorithm is the same for the members of each group; and these values, the company argues, are similar for white and African-American people (Dieterich 2016). One might wonder how it is possible for an algorithm to have equal accuracy (or equal predictive power) across the two groups and yet unequal false positive and false negative rates. Counter-intuitive as it may seem, it has been shown not only that the two can occur together, but that it is impossible, under realistic conditions, to have equal accuracy or predictive power and, at the same time, equal false positive and false negative rates across groups (Chouldechova 2017; Kleinberg 2016).

My aim in this paper is to advance some ideas about how these kinds of algorithms should be constructed in order to avoid discriminatory biases. More precisely, I ask whether it is plausible (against authors such as Hedden 2021) to sacrifice some degree of predictive power in order to fulfill other requirements of fairness.
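To make the impossibility claim above more concrete, the following is a minimal sketch of the relation underlying Chouldechova's (2017) result, stated with standard definitions rather than the paper's own notation: p is a group's base rate of recidivism, PPV the positive predictive value of the algorithm for that group, and FPR and FNR its false positive and false negative rates for that group.

\[
\mathrm{FPR} \;=\; \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot\bigl(1-\mathrm{FNR}\bigr)
\]

If the PPV is the same for two groups whose base rates p differ, this identity implies that their false positive and false negative rates cannot both be equal as well; at least one error rate must diverge. This is the sense in which COMPAS can satisfy Northpointe's criterion of equal predictive value while still exhibiting the disparity in error rates reported by ProPublica.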