Here we are, midway through 2025, and there have been LOTS of folks who’ve pointed to sexism and racism in our engagement with AI. There’s the Unmasking AI book by Joy Buolamwini (who also wrote about sexism and racism in AI in Time), Artificial Whiteness by Yarden Katz, Algorithms of Oppression by Safiya Noble, and more. There are so.many.essays on this. And that’s not to mention the enormous number of essays that engage sexism in science fiction, in science more broadly, and in the media representation of women. So, there are like 18 gazillion academic and popular writings on the representation of women and the likelihood of misogynistic representation.
And yet.
Let’s get a quick bias out of the way: I am completely biased against Geoffrey Hinton and his nonsense warnings about AI. I don’t much care for the opinion of anyone who believes his life’s work is now dangerous after it made him a millionaire, and who has literally stated that “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.” Maybe if he now donated all his money to charities that promote human flourishing, I’d care what he thinks. But he doesn’t (as far as I know), and I don’t. So, there’s my bias on the table.

So when I look at this image on LinkedIn, an attempt to make Hinton look like a concerned grandfather, I care little for his opinion and a whole, awful lot about why the robot in the top left looks like a malevolent woman. Why doesn’t the robot have his face? Why is AI, now rendered as an existential threat to humanity, suddenly female?
I’m confident that if we refuse to be good to one another when we talk about AI, we aren’t going to build AI to be good.