Ariadna Font, creator of Twitter's algorithmic ethics area: "Criticism does not scare us" | Digital Transformation | Technology


Ariadna Font. FLAMINIA PELAZZI

The Spaniard Ariadna Font (Barcelona, 1974), alias @quicola, came to Twitter six months before the pandemic, attracted by its values of "transparency, kindness, inclusion and care for others." From one day to the next, she went from traveling non-stop to telecommuting. She came from IBM, where she had served as Director of Product Development and Director of Design, Data and Artificial Intelligence (AI) at IBM Watson, and before that as Director of Emerging Technologies at IBM Research. She also worked at several startups after earning a doctorate in Language and Information Technology from Carnegie Mellon University (USA).

At Twitter, Font is the engineering director for the machine learning platform. Her purpose is to improve Twitter by enabling "advanced and ethical" AI. To that end, she directs the social network's efforts around responsible AI, and a few months ago she created the META area (not to be confused with Facebook's new name): its Machine Learning Ethics, Transparency and Accountability team. This interview takes place through the computer screen, between Barcelona and New York.

What is your job?

I am the engineering director of the machine learning platform, which is present in almost every part of what each person sees on Twitter: what appears when they log in and in what order, what notifications they receive, the results of a search… It also intervenes in less visible areas, such as detecting tweets that may be violating the platform's policies. Our purpose is to cultivate a healthy public conversation and to ensure that the people building the technology behind it have the tools to do so; that, by design and from the start, the right questions can be asked at the right time. This implies creating a safe space where everyone can express themselves freely, in a culture of respect. That is complicated, and in my work it translates into creating algorithms that are as open as possible, so that we know how they impact the user experience.

How?

Training people, sharing good practices and documenting everything. We are designing a new version of the platform, and we have the opportunity to ask ourselves certain questions. We analyze and evaluate each new functionality and classify the associated risks. The next step is to conduct internal audits. We have already done some, but we want to do them systematically and much more rigorously. On the other hand, how we apply the results of our research is key. For example, a year ago there were comments from people on Twitter about problems with our image cropping algorithm. After an internal investigation, we saw that the algorithm was technically accurate and we did not find any bias. The problem was that an automated system, and not the user, decided how to crop their image. Our conclusion was that we should not decide it for the user, so we changed it.


You created the META team to ensure accountability and transparency of Twitter AI. What is your strategy?

We want to be proactive, not just reactive: eliminate biases and prevent them. We work mainly in three areas: transparency, which is not very fashionable in the corporate sphere; state-of-the-art research and detailed analysis; and tools and standards, starting with risk management. We want to be worthy of users' trust. It is not very common for black boxes to be opened, but people have to understand how that AI works and feel that they control their experience.

A very important part of our task is to communicate what is happening within those systems and to increase transparency and explainability. We do not share only what makes us look good; on the contrary, it is when things do not go well that we can learn the most. That is why one of our pillars is to help people know how we make algorithmic decisions and to be publicly accountable. How? By sharing the results and how we are mitigating cases of algorithmic injustice. And by doing it not only in a scientific article but with explanations that anyone can understand.

You just did that with a study in Spain, France, the United Kingdom, the United States, Canada and Japan, which concluded that the platform favors tweets from the political and media right.

It is one example of many. We want to create spaces in which users can feel safe to criticize. Everything we learn, we want to use to improve not only our algorithms but our operations. We want to be open and flexible enough to change the user experience. We share this research to move forward and so that others can learn from it, so that the academic community and industry can benefit.

You talk about engaging the community on that path to algorithmic justice.

We take feedback from Twitter users very seriously. We do not want to simply communicate in a one-way fashion through our blog. We need a two-way conversation, listening to users and the scientific community. In that sense, we are analyzing the best way to listen to everyone. We have several initiatives where users can apply labels and say what they consider acceptable or not.

On the other hand, we are very interested in working with the scientific and hacker communities. We are always looking for ways to open up our reviews so they can ask questions and contribute ideas. At Twitter we are aware that we do not have the ability to ask or resolve every question. We see the scientific community as an extension of our team.


You have formalized some collaborations with both ethical hacker networks and academics.

We have people who dedicate one day a week to working with us, and others who stay for three or four months. This is the case with Sarah T. Roberts, who is investigating user agency in algorithmic decisions. This is something we want to do more and more: collaborate productively and integrate these people's outside perspective.

How do you collaborate without creating dependencies? How do you avoid the suspicion that Twitter is trying to buy these people off?

It is very difficult. Perhaps spending one day a week with us can create that dependency, which runs against our interest. Their independence is precisely the benefit for us: their insight and external feedback. We want that critical community to help us improve. Criticism does not scare us. We know that we are going to make mistakes. Perfection does not exist.

Do you plan to conduct external audits?

We have talked about it many times, and it would be ideal. The challenge is doing it without having to share or reveal user data. We have to create a data environment with technologies that preserve users' privacy. It is a preliminary step that seems easy, but it involves an entire infrastructure and a lot of work. Whenever we can, we open the code so that people can analyze it, but that process is not automatic. We are applying what is known as "differential privacy" to, for example, analyze possible biases without using demographic data. For this, among other things, we have hired a leading expert, Lea Kissner, who leads the privacy team.
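The core idea behind differential privacy can be shown in a few lines: add calibrated noise to an aggregate query so no individual record can be inferred from the result. This is a minimal sketch using the Laplace mechanism; the function names and the count query are illustrative, not Twitter's actual implementation.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via the inverse-CDF transform."""
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Epsilon-differentially-private count.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices for epsilon-DP.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller `epsilon` means stronger privacy but noisier answers; an analyst sees only the noisy aggregate, never the underlying demographic attributes.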

How does Twitter protect user privacy?

Privacy is in our DNA. Only by protecting it can we promote an open and healthy public conversation. We believe that privacy is a fundamental right, not a privilege. Twitter allows users to make informed decisions about the data they share with us. We must be transparent and provide meaningful control over what data is collected, how it is used, and when it is shared. We continue to invest in the teams, technology and resources to support this critical work.

In addition to Kissner, she has hired many other women. To lead META she recruited the applied algorithmic ethics specialist Rumman Chowdhury, and she notes that her team of roughly 100 people is already 50% women.

I have always focused on creating diverse teams, something especially relevant in the area of responsible AI. We want to attract the best talent in the industry and have that diversity not only in gender or race, but in training, culture, life experiences and ways of seeing things.


What is your team doing to adjust the algorithms and prevent them from rewarding disinformation or extremist or inflammatory content?

When we identify content that violates Twitter rules, we take enforcement steps and work to prevent such content from being amplified. For example, we may label or request the removal of tweets that violate our civic integrity policy, our COVID-19 misinformation policy, or our synthetic and manipulated media policy. Labeled tweets have limited visibility in searches, replies, and timelines, and the Twitter algorithm does not recommend them.

We also want to give people meaningful control over their experience. They can choose whether or not they want an algorithmic timeline: with it, their home timeline shows tweets from the accounts they follow on Twitter, as well as recommendations for content they might be interested in, based on the accounts they interact with most.

Algorithmic amplification is less a function of the particular algorithm and more a function of how people interact with these systems. Algorithms can reflect and replicate harmful social biases. That’s why, as META investigates Twitter’s algorithms, we publicly release the findings and work to develop solutions.

In what other ways could these problems be tackled?

We prioritize taking proactive action against content that violates our standards, including abusive content. That means people on Twitter don't have to experience the abuse or harm before we act. In fact, 65% of the abusive content we take action on is identified proactively, for subsequent human review, rather than through user reports. We continue to improve the accuracy of our machine learning to better detect and act on content that violates our policies. As a result, enforcement actions against accounts for abusive content have increased 142%.

Would a more ‘organic’ Twitter be possible?

Since 2016, people on Twitter have been able to toggle between "Home" (the algorithmically ranked timeline) and "Latest Tweets" (where the most recent tweets are displayed as they are posted). We will soon begin an experiment around toggling between the two. This allows any Twitter user to view the content they like in the way that suits them best. Furthermore, we are working to decentralize the Twitter experience through new features and products that provide a more customizable and personalized experience.
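The difference between the two timeline modes she describes comes down to the sort key and the candidate pool. This toy sketch illustrates the distinction; the `Tweet` fields, the `score` attribute, and both functions are hypothetical simplifications, not Twitter's actual ranking system.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    posted_at: int    # unix timestamp
    score: float      # hypothetical relevance score from a ranking model

def latest_timeline(tweets, following):
    """'Latest Tweets' view: only followed accounts, newest first."""
    return sorted((t for t in tweets if t.author in following),
                  key=lambda t: t.posted_at, reverse=True)

def ranked_timeline(tweets, following):
    """'Home' view: candidates include recommendations from outside the
    follow graph, ordered by predicted relevance instead of recency."""
    return sorted(tweets, key=lambda t: t.score, reverse=True)
```

In the ranked view a high-scoring tweet from a non-followed account can outrank a newer tweet from a followed one, which is exactly the behavior the "organic" toggle lets users opt out of.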





elpais.com
