In a whirlwind of great press (all publicity must be good publicity, right?), Google launched an AI ethics review board and, within just one week, disbanded it in the face of great criticism. Apparently, someone at Google was wise enough to know that some outside voices might help the company navigate past the Scylla of Skynet and the Charybdis of surveillance domination. Alas, no one was wise enough to construct a group of advisors, or a system of advising, that would make sense. The Verge reports that the primary complaint concerned Kay Coles James of the extremely conservative Heritage Foundation; but as Vox reports, the inclusion of a CEO who builds drones surely wasn't helping, and one board member claimed to know something "worse" about an unnamed fellow member than James's affiliation with climate-change denialism and ethically problematic (deeply problematic!) attacks on the LGBTQ community.
Certainly Google was right to go looking for a diversity of political perspectives, though why James would be better than a respected thinker like Francis Fukuyama escapes me entirely. I disagree with much of what Fukuyama has written, but I can't deny that he's worth reading. And diversity can't justify bringing people on board with a conflict of interest, such as those who make money off of military AI (which is not necessarily to say that military AI is a problem in and of itself; that, too, remains to be discussed!). In her Vox article, Kelsey Piper rightly points out that Google's board was constructed in ways practically guaranteed to fail. One sad upshot here is that the Heritage Foundation gets to reaffirm (at least internally) the silly notion that conservative voices are under attack and excluded from the public sphere, when in fact their voices are typically the loudest, not to mention the best represented by our unrepresentative legislative branch. It's vital to keep in mind that vicious voices should always be excluded from the public sphere, as should inane ones. If a "think tank" wants to delegitimize people's humanity and undercut their rights, or to play cheerleader for corporate and political interests that will literally leave billions of people scrambling for water and food, then it shouldn't get a platform to authorize its counterproductive and anti-human perspectives. There are conservative thinkers who can debate policy efficacy without dredging the depths of ignorance. Those are the kinds of voices that should be included when we seek diversity of opinions.
Helpfully, the MIT Technology Review gathered input from a host of reasonable folks and offered useful comments on what would have been better. There is a series of good suggestions for Google there, and I need not repeat them here. I want to add only that if Google really wants to build an ethical engine for guiding its practices, it's also going to need to think about the cultural matrix that quietly undergirds so much of AI in the world today. That's not a once-every-four-months unpaid position, so Google is going to need to up its game. In my own work, especially Apocalyptic AI, I've engaged the religious currents that shape tech ideology, and I have a new, comparative book in the works to enrich this effort. This is just one of the cultural streams at play in artificial intelligence and robotics, but until it and similar value systems are better understood, I don't think we'll make much progress in the ethics of AI.
General Atomics MQ-1 Predator image: By Lt. Col. Leslie Pratt – afrc.af.mil, Public Domain, https://commons.wikimedia.org/w/index.php?curid=68261178