From left to right: Rachel Coldicutt, David Leslie, Rumman Chowdhury, Noura Al Moubayed and Wendy Hall
The Royal Society / Debbie Rowe
It’s the second day of the Women and the Future of Science conference at the Royal Society in London, but I’m finding it increasingly difficult to focus on the speakers because my AI transcription software – which is supposed to make my life easier – keeps insisting that I’ve spelled someone’s name wrong. Every time Julia is mentioned, it writes Juliana. The irony is not lost on me: this is a session about artificial intelligence, and specifically about how women are being erased by the latest AI technologies.
This is much bigger than the now-familiar idea that AI algorithms carry the biases of the data sets they are trained on, including gender bias.
Instead, the session, chaired by computer scientist Wendy Hall, is trying to address a more fundamental problem: the fact that new AI technologies that will have a transformative effect on society as a whole are being designed almost exclusively by men.
Technology has always been a predominantly male sector: in the United Kingdom, only 25 per cent of computer science graduates are women. But in recent years – as generative artificial intelligence has boomed – Silicon Valley has become increasingly hostile to women.
“There’s been a regression in the last two years,” says David Leslie, who leads research on ethics and responsible innovation at the Alan Turing Institute. “That the Trump administration has caused generational damage to women in science is indisputable. We are living in a time of regression.”
Last year, US President Donald Trump issued an executive order targeting so-called “woke AI”, directing the US National Institute of Standards and Technology to revise its AI risk management framework to “eliminate references to misinformation, diversity, equity and inclusion, and climate change”.
One panelist, Rumman Chowdhury, a data scientist and former US science envoy for artificial intelligence, led ethics and accountability at Twitter before Elon Musk took over and fired her team. She points out that the concept of “woke AI” was born of misogynistic attitudes in Silicon Valley well before Trump’s order.
When Hall asks the panel to describe an AI world without women, several panelists say we are already there. “I live in a frontier AI world, and that’s an AI world without women,” says Chowdhury. The sentiment is echoed by Rachel Coldicutt, who examines the social impacts of new and emerging technologies. “If you think about what a world without women in AI looks like, I think that’s what we have at the moment. It’s not a fantasy at all.”
It should go without saying that this matters. There is a long history of technology being developed for male bodies and needs, from crash test dummies to office air conditioning, astronaut spacesuits and the vast majority of medical research. This is known as the gender data gap, and its effects range from the unpleasant to the life-threatening.
Artificial intelligence will affect everything from the work we do to the way we educate our children and the diseases we can treat. Yet currently only 2 per cent of venture capital funding goes to women, Chowdhury points out. Meanwhile, less than 1 per cent of healthcare research and innovation funding is directed towards women’s health conditions. “We need to make technology work for 8 billion people, not eight billionaires,” says Coldicutt.
So what needs to be done? With hundreds of years of biased data baked into current AI models, Coldicutt doesn’t believe they can be fixed. “We need alternative models,” she says. This is also an opportunity to change the focus of what these models do. “It’s about cultivating models … that prioritize caring for people, for the planet.”
Chowdhury, who co-founded Humane Intelligence, a nonprofit that helps companies make AI systems more accountable and fair, thinks part of the problem is that much current AI development is built on a false sense of urgency, focusing on the existential risk AI poses to jobs or even to humanity. If the story is that your house is on fire, “you’re not like, ‘What happened to my mother’s jewelry?’” she says. If people feel they don’t have time, they’ll skip anything that feels extraneous to them, including diversity, she says.
As for the next generation, if we want to encourage young people to develop AI for social good, we need to address the economic and political framework within which AI develops, says Leslie: “We need to start with the basics, start with transforming the incentives.”
Ultimately, we may also need to rethink our very definition of intelligence in the context of AI to include broader and more diverse ways of thinking. Much of the original thinking about artificial intelligence, including how to define it, originated at an influential meeting at Dartmouth College in New Hampshire in the 1950s. “This definition of intelligence comes from the Dartmouth conference,” says Hall. “Which, by the way, was all men.”