It’s in the news right now that a defendant will rely on an assist from an artificial intelligence project designed to level the playing field between ordinary people and large institutions. The unnamed defendant will get advice on what to say from a GPT-powered smartphone app that feeds advice straight to his Bluetooth headphones.
I can’t comment on the merits of this particular AI app or its likely long-term development, and I have some questions about where such technology could go. Still, I believe that, if properly designed, it would align with one of my goals for AI progress: protecting individuals.
As I’ve noted in Futures of Artificial Intelligence, many of our digital technologies were designed with the wrong ethos. We used the Internet to help people be heard without asking ourselves how people would shelter themselves from others’ voices. Those who are subject to insults, threats, harassment, doxxing, and the like are largely unable to filter out the worst behavior of others. If I decide that I want everyone to hear me say terrible things about another person, that can be accomplished. But if people want to say terrible things about me, it’s hard for me to find safety from that. We should have an ethos that leads us to build technologies that empower our safety and security before they empower us to extend our presence. We need shields before we need longer arms!
[Web advertising presents the same problem, though it is less ethically fraught. The add-ons and apps that cull the throng of advertisements now appearing on every page, reminiscent of the early days when thirty little ad windows would pop up in front of you, are fighting a losing battle against aggressively loud pop-up ads, drop-down ads, video windows, and the rest.]
So an AI that helps an individual in conflict with forces generally outside that person’s control seems like a pretty good idea. As the CEO noted, individuals can rarely fight on fair legal ground with company lawyers. Individuals simply cannot afford to pile up legal fees even when they are in the right. A company can afford to lose money on one case if doing so buries all the potential future cases.
I do have some quick, off-the-top-of-my-head questions about this, though. And I might come up with more.
- What happens when companies can simply afford access to better machine learning resources than those afforded by a widely distributed technology? It seems like it could be a kick-the-can scenario.
- What happens if two individuals pit the app against itself?
Thanks to DALL-E for this “robot lawyer in the style of Salvador Dalí.” Thanks also to Dalí and everyone else whose work was relied upon to build this image. I’m still wondering how we can solve the ethical concerns inherent in these AI image generators… feel free to leave a comment on robot lawyers, robot art, and the ethics of our artificially intelligent age.