Algorithms do not yet hand down sentences in any legal system. But they do help judges in several countries make decisions. The best-known system of this type is Compas, widely used in the United States, which tells judges the probability that an inmate will reoffend when they decide on parole and prison leave or benefits. Now an analysis by MIT Technology Review has concluded that this tool discriminates against certain minorities, specifically Black people. The report agrees on this point with previous studies, but it also reaches another disturbing conclusion: that it is practically impossible to correct the system's biases.
Compas, the system developed by the company Northpointe and applied in the states of New York, California, Florida and Wisconsin, among other jurisdictions, uses machine learning, an artificial intelligence technique in which the algorithm refines itself as it learns from the data it processes. The profile of each subject analyzed by Compas is built from a questionnaire of 137 questions, ranging from whether they have suffered family abuse or have a criminal record to whether they ever skipped school or feel discouraged. Some questions are filled in by an official using government records; others must be answered by the person in question. The answers are weighted according to a series of criteria, producing a final score from 0 to 10. If it is above seven, the risk of recidivism is considered high. A similar system, RisCanvi, operates in Catalonia, although it does not rely on artificial intelligence but on statistical techniques such as multiple regression.
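To make that scoring mechanism concrete, here is a minimal sketch of a questionnaire-based risk score. The items and weights are invented for illustration; Compas's actual 137 items and their weighting are proprietary. Only the 0-to-10 scale and the threshold of seven come from the description above.

```python
# Illustrative sketch of a questionnaire-based risk score.
# The items and weights below are hypothetical; only the 0-10 scale
# and the "above 7 = high risk" threshold come from the article.

def risk_score(answers: dict, weights: dict) -> float:
    """Weighted sum of yes/no answers, rescaled to a 0-10 score."""
    raw = sum(weights[item] * answers[item] for item in weights)
    max_raw = sum(weights.values())  # score if every answer were "yes" (1)
    return round(10 * raw / max_raw, 1)

weights = {"criminal_record": 3.0, "family_abuse": 2.0,
           "skipped_school": 1.0, "feels_discouraged": 0.5}
answers = {"criminal_record": 1, "family_abuse": 0,   # 1 = yes, 0 = no
           "skipped_school": 1, "feels_discouraged": 1}

score = risk_score(answers, weights)
print(score, "high risk" if score > 7 else "not high risk")  # 6.9 not high risk
```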
An investigation published by ProPublica in 2016 cast doubt on Compas. The study compared the risk assessments of more than 7,000 detainees in a Florida county with whether they actually reoffended. Its conclusions were devastating: the system's accuracy was similar whether it was applied to a white or a Black person, but its mistakes penalized Black people more, who were almost twice as likely as whites to be misclassified as potential repeat offenders.
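The kind of check ProPublica ran can be sketched in a few lines: for each group, take the defendants who were never re-arrested and count how many of them the tool had nevertheless flagged as high risk (false positives). The toy records below are invented; the real dataset covers more than 7,000 Broward County cases.

```python
# Sketch of a ProPublica-style false-positive comparison, on made-up records.
# A "false positive" here: scored above 7 (high risk) but never re-arrested.

def false_positive_rate(records: list, group: str) -> float:
    """Share of non-re-arrested defendants in `group` flagged as high risk."""
    not_rearrested = [r for r in records
                      if r["group"] == group and not r["rearrested"]]
    flagged = [r for r in not_rearrested if r["score"] > 7]
    return len(flagged) / len(not_rearrested) if not_rearrested else 0.0

records = [  # invented toy data, not the Broward dataset
    {"group": "black", "score": 8, "rearrested": False},
    {"group": "black", "score": 9, "rearrested": True},
    {"group": "black", "score": 3, "rearrested": False},
    {"group": "white", "score": 8, "rearrested": False},
    {"group": "white", "score": 2, "rearrested": False},
    {"group": "white", "score": 4, "rearrested": False},
]

for g in ("black", "white"):
    print(g, round(false_positive_rate(records, g), 2))
```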
The impact of the ProPublica study still resonates today. It was followed by other studies with similar results, making Compas a textbook example of so-called algorithmic bias (errors by machines that discriminate against specific population groups). Its predictive accuracy, setting the racial question aside, was also evaluated, and it turned out to be no better than that of human beings: both the algorithm and human officials get it right about 65% of the time (that is, they are wrong 35% of the time).
The new MIT Technology Review analysis re-examines the database used by ProPublica (7,200 profiles rated by Compas in Broward County between 2013 and 2014), this time focusing on a random sample of 500 white and Black defendants. The conclusions are the same: “At Compas's default threshold of between 7 and 8 [high risk of recidivism], 16% of Black defendants who are not re-arrested have been unnecessarily incarcerated, while the same is true for only 7% of white defendants. There is a similar difference in many jurisdictions across the US, partly due to the country's history of police disproportionately targeting minorities,” writes Karen Hao, the author of the analysis.
Can an algorithm be fair?
So is the algorithm racist? Or rather, is the person who programmed it racist? Not necessarily. The predictions Compas makes reflect the data used to produce them. If the proportion of Black defendants who end up being arrested is higher than that of white defendants, it is to be expected that the arrests the algorithm projects will also be higher for that group. “They will have a higher risk score on average, and a higher percentage of them will be assessed as high risk, both correctly and incorrectly,” Hao adds.
Hao's analysis also indicates that tweaking the algorithm does not correct the overrepresentation of Black people among those assessed as having a high risk of recidivism, unless the system is altered to take race into account, which in the US (and many other countries) is illegal. The conclusion: there is no way for the algorithm to act fairly toward both groups at once.
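A toy calculation (with invented numbers, not the Broward data) shows why this trade-off appears: if the tool performs equally well for both groups in the sense of equal sensitivity and equal precision, but the underlying re-arrest rates differ, the false positive rates are forced to differ too.

```python
# Toy arithmetic: equal sensitivity (TPR) and precision (PPV) across two
# groups with different base rates implies unequal false positive rates.
# All numbers are invented for illustration.

def false_positive_rate(n: int, base_rate: float, tpr: float, ppv: float) -> float:
    positives = n * base_rate      # people who are actually re-arrested
    negatives = n - positives      # people who are not
    tp = tpr * positives           # re-arrested and correctly flagged
    predicted_pos = tp / ppv       # total flagged, pinned down by precision
    fp = predicted_pos - tp        # flagged but never re-arrested
    return fp / negatives

# Same tool quality (70% sensitivity, 70% precision) for both groups,
# but re-arrest base rates of 60% vs 30%.
print(false_positive_rate(1000, 0.6, 0.7, 0.7))  # ~0.45
print(false_positive_rate(1000, 0.3, 0.7, 0.7))  # ~0.13
```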
For Antonio Pueyo, professor of the psychology of violence at the University of Barcelona and director of the research group that developed RisCanvi, this statement does not make much sense. “Both human and algorithmic decisions have biases. The reasons are varied: because humans feed the algorithm with inadequate data, because the algorithm has not been well programmed, or because biased criteria are built into it, such as different cut-off points for certain groups.”
The very questions used to fill in Compas are anything but innocent. One of them, for example, asks whether the subject has ever been stopped by the police. Almost any Black citizen who grew up in a predominantly Black neighborhood in the United States has had that experience, something far less common among white citizens raised in predominantly white neighborhoods. The same problems also tend to surface in so-called predictive policing algorithms, the ones used by security forces to forecast where a crime will take place.
And what about the use made of the report that Compas produces? Lorena Jaume-Palasí, an expert in algorithmic ethics and an advisor to the European Parliament and the Spanish government, among others, believes that how the algorithm's output is used afterwards matters as much as the algorithm itself. As she told EL PAÍS, studies show that US judges who use Compas do not always heed the recidivism risk the system reports. The algorithm may say a subject is low risk and, if the defendant is Black and the judge is racist, parole may be denied anyway.
Another common criticism of Compas is that the inner workings of the algorithm are unknown: how the answers to each question are weighted, and how the process that produces each individual's final assessment works. This is not unique to algorithms applied to justice. Frank Pasquale, a professor at Brooklyn Law School and an expert in artificial intelligence law, addressed the sector's lack of transparency as far back as 2015 in his book The Black Box Society.
What, then, should be done with the algorithms applied to justice? Eradicate them? Keep them? Improve them? Pueyo favors the last option. “Statistical or artificial intelligence techniques are just that, techniques, and therefore they can be improved. Trying to eliminate them from professional practice out of ideological prejudice is not reasonable, since day after day they show that their advantages outweigh their limitations.”
Hao, the author of the MIT analysis, does not share that view. In the article she cites the so-called Blackstone ratio, a reference to the English jurist William Blackstone, who wrote at the end of the 18th century that it is better that ten guilty persons escape than that one innocent suffer. Seen through that prism, no algorithm is worth it.