Technology | Feminism

Artificial intelligence and its biases

It is well known that algorithms have a way of reinforcing social prejudices, but what can be done about it? Scholar Alex D. Ketchum takes a feminist look at the issue

Interview by Atifa Qazi

Alex Ketchum, how sexist is the AI scene?

There has been a lot of research by scholars, activists, and artists showing that the data sets of AI systems are sexist, racist, homophobic, and transphobic, as well as ableist and classist. But even if you change the data sets, the problem is embedded in the working conditions: AI workplaces tend to be white, cis, male, and heterosexual, and workers from other backgrounds often experience workplace aggression, sexual harassment, and racist remarks.

Training sets are also often based on a binary understanding of gender

How does that shape the tech products we use?

In North America, for example, many employers use automated systems to filter CVs. Amazon’s AI recruiting tool had a major flaw: if a CV indicated that the applicant was a woman, the system automatically rejected it. This happened because the system was trained on the CVs of existing, mostly male employees, and so it unintentionally reinforced a preference for hiring more white men. In the end, Amazon scrapped the tool.

Moreover, the data sets used for AI tools in health care often underrepresent certain groups. AI systems for identifying certain skin cancers, for example, are mostly trained on images of white skin and don’t work well for people with darker skin tones. Training sets are also often based on a binary understanding of gender; scholars such as Morgan Klaus Scheuerman and Sasha Costanza-Chock have shown how this can negatively impact trans, non-binary, and genderqueer folks.

A lot of my own work around AI has been about uplifting voices that are often not listened to in the AI community

Has there been any progress on combating these issues?

Between 2019 and 2021, many major companies embraced the idea of AI ethics, particularly in response to the Black Lives Matter movement and influential work on gender and racial biases by figures like the computer scientists Joy Buolamwini and Timnit Gebru. A lot of these companies created so-called AI ethics teams. Timnit Gebru co-led the ethical AI team at Google, but once she exposed the environmental and ethical implications of the company’s work, Google pushed back, telling her she shouldn’t criticise the company. This highlights that creating teams and hiring critical-thinking individuals isn’t enough if their insights aren’t taken seriously. How much is going to change if you don’t listen to them?

Many people use ChatGPT and don't consider the footprint of the technology

You are the director of the Just Feminist Tech and Scholarship Lab and the founder of Disrupting Disruptions: the Feminist and Accessible Publishing, Communications, and Technologies Speaker and Workshop Series. How do you address these issues?

In our Disrupting Disruptions series, scholars, creators, students, and activists come together to think critically about what it means to have just, feminist technology and scholarship.

Sometimes it's about raising awareness of the environmental impacts of AI. Many people use ChatGPT and don't consider the footprint of the technology. Some individuals emphasise the protection of Indigenous data. For instance, Peter Lucas Jones and Keoni Mahelona have spoken about their work with the Māori community to preserve their language through an AI system.

I think a lot of my own work around AI has been about uplifting voices that are often not listened to in the AI community: women, people of colour, queer people, disabled people. That way they can share their perspectives, because so much of the discourse is dominated by straight white dudes.

Feminism has frameworks for analysing power dynamics and an attentiveness to gender, racial, and class dynamics within labour and resource extraction

In your opinion, which AI tools are incompatible with a feminist approach?

There's a movement called “Tech Won't Build It”. It emerged among tech workers who had been developing technologies for companies such as Google for one purpose, and then discovered that Google was signing contracts with military vendors without their consent. It shows that even technologies developed for good purposes can be misappropriated or used in ways you don't imagine, like making tools that optimise killing people. It's like asking: is there such a thing as a feminist gun? I guess there are some people who could argue that, but I don't think we need to optimise military death.

Given the substantial environmental impact of AI, how can a feminist approach to it be justified?

The AI field has a huge environmental impact and serious labour issues, including the amount of water used to cool data centres, the mining of materials for microprocessors, and the exploitation of workers. Let’s just say, I’m very apprehensive about it.

From my perspective, what we see in AI systems is not intelligence, it is basically maths

However, there are ways to mitigate some of the harm. Where data centres are placed can make a big difference: if they are located in colder climates, for instance, they require less water for cooling. These are large questions, and feminism has some tools to address them, such as frameworks for analysing power dynamics and an attentiveness to gender, racial, and class dynamics within labour and resource extraction. However, it is important to also incorporate toolkits and frameworks from other social justice, anti-racist, Indigenous, and environmental movements.

And can these problems be addressed while continuing AI development, or is the solution to abstain from it altogether?

From my perspective, what we see in AI systems is not intelligence; it is basically maths. A lot of feminist science tries to value different ways of knowing, like embodied knowledge and life experience, and it is also about decentering certain kinds of hierarchies. Feminist approaches to AI would constantly question which types of knowledge and methods of understanding are valued, while remaining aware of the dynamics of power.

“Let's slow everything down and not just put tools in place without really doing research and studying how they can impact people’s lives”

The American researcher and artist Caroline Sinders, for example, has created the Feminist Data Set project, which interrogates every step of the AI process: data collection, data labelling, data training, and the selection of an algorithm and algorithmic model. Rather than relying on the extractive labour practices of click work, the project compensates workers with a living wage.

I think it makes sense to take the approach of “let's slow everything down and not just put tools in place without really doing research and studying how they can impact people’s lives.” Some have gone a step further, such as the artists Stephanie Dinkins and Suzanne Kite, who have thought about the ways we can make AI and other machines our kin.

The interview was conducted by Atifa Qazi as part of the Heinrich Böll Foundation's Transatlantic Fellowship.