The Omnipotent Algorithm Fallacy and Other Misconceptions We Have About These Systems


When Sun Tzu remarked in The Art of War on the need to know one’s enemy well, there were no computational algorithms to worry about. Two millennia later, we have seen them assign higher recidivism risk scores to prisoners from minority groups, fire 150 people in a second and even take part in armed conflicts. And we still don’t know them. A study carried out by researchers at the University of Amsterdam with a sample of 2,106 people found that more than half of the respondents believe that algorithms are independent of human activity, have no bias, have the same level of critical reasoning and intelligence as humans, and will replace us. A not insignificant 43% think that these systems can solve “all the problems of society”.

“We wanted to know if people have a correct idea of what algorithms are and what they do, because they come across them every day: on social media, on their phones, when they watch TV…”, explains Brahim Zarouali, a researcher focused on the study of persuasive communications and technologies. What they did not expect was to find such a level of ignorance. “This seemed really alarming to us,” he says. Furthermore, the phenomenon is more pronounced in certain demographic groups, with these misconceptions more prevalent among older people, those with lower levels of education, and women.

The research focuses on the algorithms at work in information consumption platforms, which can personalize and adapt the information shown to each person, but the researcher does not rule out that the same confusions identified in this case extend to other applications of these systems.

What is the cost of these knowledge gaps about systems that are increasingly present in our lives? “Our argument is that they could widen the digital divides in our society. It is very important that we all have the same skills and knowledge to benefit from technology and algorithms,” Zarouali reasons.

Out of sight

Zarouali and his team place the origin of the problem in the intangible nature of these systems, which operate in the background, without anyone seeing their ins and outs and, in many cases, as black boxes whose decisions cannot be explained. “This makes it difficult for the general population to develop a correct idea of what algorithms can do and how they work,” summarizes the researcher.

What should we know about them? The study takes some basic ideas as its starting point. Following Tarleton Gillespie’s definition, algorithms can be described as encoded procedures that transform input data into a desired output through specified calculations. A condensed and more pointed version of this description comes from Cathy O’Neil: “algorithms are opinions embedded in code.”
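To make Gillespie’s definition concrete, the following is a minimal, entirely hypothetical sketch of a feed-ranking algorithm: a coded procedure that turns input data (posts) into a desired result (an ordering) through specific calculations. The function name, fields, and weights are invented for illustration; the point is that the chosen weights are exactly the kind of “opinion” O’Neil describes, since preferring engagement over recency is a design decision, not a mathematical necessity.

```python
def rank_feed(posts, engagement_weight=0.8, recency_weight=0.2):
    """Order posts by a weighted score, highest first.

    The weights encode the designer's preferences: setting
    engagement_weight > recency_weight favors attention-grabbing
    content over fresh content.
    """
    def score(post):
        return (engagement_weight * post["engagement"]
                + recency_weight * post["recency"])
    return sorted(posts, key=score, reverse=True)

# A highly engaging old post vs. a brand-new post with little engagement:
posts = [
    {"id": "viral_old", "engagement": 0.9, "recency": 0.1},
    {"id": "fresh_new", "engagement": 0.2, "recency": 1.0},
]
print([p["id"] for p in rank_feed(posts)])  # ['viral_old', 'fresh_new']
```

Flipping the two weights reverses the ranking for the same input data, which is why two platforms can show the same content in very different orders.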

It is also important for the researchers to consider the context in which these systems often operate: they have become critical parts of the competitive advantage of many technology companies. “This explains why many companies are reluctant to expose their algorithmic code to the outside world,” they point out. “Algorithms can reflect not only the biases of those who design and operate them, but also the values and preferences of the companies that offer them.” As for their ability to match our intelligence, replace us or solve any problem, the reality is that their capabilities are, at least for now, limited to performing specific tasks very efficiently.

Algorithmic literacy, Zarouali explains, is key if we are to take an active role in scrutinizing these systems, resist the verdicts of those that are problematic for us, and benefit from the services of those we see as aligned with our interests. Not surprisingly, despite the alarming damage they can cause in their role as enemies, these tools can also help us predict a stroke two years in advance, recover works of art believed lost or minimize the risk of covid-19 contagion. “It is important to have a critical digital citizenship in all layers of society,” says the researcher.

The researchers insist that the persistence of these misconceptions can show its effects in two ways: on the one hand, excessive wariness can push us to reject these systems unjustifiably on the basis of a dystopian vision of the future; on the other, disproportionate trust in them can help reinforce stereotypes and inequalities and spread manipulated content such as hyper-realistic fake videos (deepfakes). “The main solution is digital literacy education. In these initiatives it is important that people are taught what algorithms are and are offered protection strategies to deal with their harmful consequences. In the same way, they should be empowered so that they can also benefit from them.”

You can follow EL PAÍS TECNOLOGÍA on Facebook and Twitter or sign up here to receive our weekly newsletter.
